Thursday, December 26, 2019

Impact of Public Debt Burden on Economic Growth of Bangladesh: A VAR Approach

Impact of Public Debt Burden on Economic Growth of Bangladesh: A VAR Approach
Md. Hashibul Hassan, Lecturer, Department of Finance, Jagannath University, Dhaka, Bangladesh. Email: hashibulhassan@yahoo.com
Tahmina Akhter, Lecturer, Department of Finance, University of Dhaka, Dhaka, Bangladesh. Email: tahmina25@gmail.com

Abstract: Bangladesh has been relying heavily on public debt to meet its budget deficit since independence. In this paper, the objective is to find out whether the government of Bangladesh is borrowing excessively from public sources and thus negatively affecting the economy of the country. For this purpose, GDP growth rate (GDP), manufacturing sector … However, in Bangladesh very few studies have been done using the vector autoregressive (VAR) model to identify the impact of the public debt burden on the economic growth of the country.

Fosu (1996) investigated the debt overhang hypothesis by studying 13 severely indebted countries: Zambia, Venezuela, Sierra Leone, the Philippines, Peru, Morocco, Mexico, Kenya, Honduras, Egypt, Ivory Coast, Argentina, and Algeria. The sample period was 1971 to 1991, and the author used the OLS estimation method for panel data. The author found a negative and robust relationship between investment and external debt. Qureshi & Ali (2010) analyzed the impact of a high public debt burden on the economy of Pakistan; the sample period of the study was 1981 to 2008. They found a substantial negative impact of public debt on the economy of Pakistan. Ahmed & Shakur (2011) performed research to highlight the problems that (external) debt creates for the economic growth of Pakistan. They used the unit root test and Johansen co-integration to analyze time series data from FY 1981 to FY 2008. The Granger Causality Vector Error Correction (GCVEC) method indicated a unidirectional relationship between external debt and the growth rate of GDP per capita. Wijeweera, Dollery & Pathberiya (2005) investigated the connections between external debt servicing and economic growth in Sri Lanka during 1952-2002 by using co-integration methodology for the long run error…
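As an illustration of the methodology the excerpt describes (unit-root testing, VAR estimation, and Granger causality), here is a minimal Python sketch using the statsmodels library. The series below are synthetic stand-ins, and the variable names, lag limit, and 5% significance level are assumptions for illustration, not the paper's actual data or specification.

```python
# Sketch of the VAR workflow described above: ADF unit-root tests,
# differencing of nonstationary series, lag selection, VAR estimation,
# and a Granger-causality test. Synthetic data stand in for the real series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 60                                            # e.g., annual observations
debt = np.cumsum(rng.normal(0.5, 1.0, n))         # nonstationary debt level
gdp = 5 - 0.3 * np.diff(debt, prepend=debt[0]) + rng.normal(0, 0.5, n)
data = pd.DataFrame({"gdp_growth": gdp, "debt": debt})

# Step 1: ADF test; difference any series for which a unit root
# cannot be rejected at the (assumed) 5% level.
for col in data:
    if adfuller(data[col].dropna())[1] > 0.05:
        data[col] = data[col].diff()
data = data.dropna()

# Step 2: fit the VAR, choosing the lag order by information criterion.
results = VAR(data).fit(maxlags=4, ic="aic")

# Step 3: does public debt Granger-cause GDP growth?
gc = results.test_causality("gdp_growth", ["debt"], kind="f")
print(f"Granger causality p-value: {gc.pvalue:.3f}")
```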

Wednesday, December 18, 2019

Native American Genocide Essays - 1362 Words

b. causing serious bodily or mental harm to members of the group;
c. deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
d. imposing measures intended to prevent births within the group;
e. forcibly transferring children of the group to another group. (Destexhe)

In this paper, I will argue that the act of genocide, as here defined, has been committed by the United States of America upon the tribes and cultures of Native Americans through mass indoctrination of its youths. Primary support will be drawn from Jorge Noriega's work, "American Indian Education in the United States." The paper will then culminate with my personal views on the subject. … In 1820, the United States made plans for a large-scale system of boarding and day schools (Noriega, 377). These schools were given the mission to "instruct its students in letters, labor and mechanical arts, and morals and Christianity," training many Indian leaders (Noriega, 378). In the case of boarding schools, Native American children would be forcibly stripped from their homes as early as five years old. They would then live sequestered from their families and cultures until the age of seventeen or eighteen (Noriega, 381).

In 1886, it was decided by the United States federal government that Native American tribal groups would no longer be treated as indigenous national governments. The decision was made not by the conjoint efforts of the Native American tribes and Congress, but by the "powers that be": the United States Legal System. This self-ordained power allowed Congress to pass a variety of other laws directed toward assimilating Native Americans, so that they would become a part of mainstream white America (Robbins, 90).

By this time the United States Government had been funding over a dozen distinct agencies to provide mandatory education to all native children aged six through sixteen. Enrollment was enforced through leverage given by the 1887 General Allotment Act, which made Natives dependent on the Government for…

Tuesday, December 10, 2019

Process Approach Model

Question: Discuss about the Process Approach Model.

Answer:

Introduction: In this assignment, an analysis tool that can help an organization gain competitive advantage is examined: the process approach. The three major components of this model are the management process, the realization process, and the support process. The intent of this assignment is to develop an in-depth understanding of the process approach model of the ISO 9000 family of quality management system standards. The assignment will also act as guidance for applying the model to any management system, irrespective of the size and type of the company. As Weske (2012) stated, beyond management systems there are various other fields in which this model can be applied, such as business risk management, social responsibility, occupational health and safety, and the environment. In this assignment, the process approach model is first analyzed; it is then discussed how Woolworths can apply this model in its management to improve its business operations.

What are the Essentials of the Process Approach Model?

According to Post and Preston (2012), a process is a "set of interrelated or interacting activities, which transforms inputs into outputs." According to sub-clause 0.2 of ISO 9001:2008, "The application of a system of processes within an organization, together with the identification and interactions of these processes, and their management to produce the desired outcome, can be referred to as the process approach" (Chase 2012). While implementing this approach, a company has to define the types and number of processes required to fulfill its business objectives. Some typical processes have been identified: management processes, realization processes, and support processes. They are described below.

Management Processes: According to Huczynski (2012), the management process is related to the development of organizational policies, strategic planning, establishing business objectives, and developing effective communication. It also helps ensure the availability of resources so that a company can fulfill its objectives and achieve positive outcomes.

Realization Processes: As Hitt et al. (2012) mentioned, these include all processes that provide the desired outcomes for a company, such as development of an appropriate business structure, a proper communication system, and appropriate promotional activities.

Support Processes: According to Lee et al. (2012), the support process is also known as the analysis, measurement, and improvement process. Weske (2012) stated that support processes include the processes needed to measure and gather data for performance analysis and for improving effectiveness and efficiency. They include measuring, monitoring, auditing, performance analysis, and improvement processes (for corrective and preventive actions). Measurement processes are often documented as an integral part of the management, resource, and realization processes, whereas analysis and improvement processes are frequently treated as autonomous processes that interact with other processes, receive inputs from measurement results, and send outputs for the improvement of those processes. According to Weske (2012), the performance of a company can be enhanced with the help of the process approach model. In this model, all processes are managed as a system.
The system is defined by the process networks and their interactions; hence, the model can create a better understanding of added value. Huczynski (2012) mentioned that the consistent operation of this network is often referred to as the "system approach" to management. The processes of an organization are often interrelated: the output of one process can be the input to another. The concept and use of this model for the management system of an organization are given below:

Figure 1: Process approach model and its implication (Source: Scheer 2012)

The purpose of implementing the process approach is to improve the effectiveness and efficiency of an organization in achieving its defined objectives. In accordance with ISO 9001:2008, this means enhancing customer satisfaction by meeting all customer requirements (Chase 2012). The benefit of using this approach is that it can provide predictable, consistent, and improved outcomes. Kapferer (2012) stated that this model can align and integrate all the processes of an organization to achieve desired outcomes.

How does Woolworths use this model?

Woolworths can use the following methodology to apply any kind of process within the organization:

Step 1: Process identification
Step 2: Planning of the process
Step 3: Implementation and measurement of the process
Step 4: Analyzing the process
Step 5: Improvement of the process

Figure 2: Stages of the process approach model (Source: Lee et al. 2012)

Process Identification: To conduct the first step, the management of Woolworths has to follow a number of stages, described below:

Figure 3: Stages related to process identification (Source: Post and Preston 2012)

Woolworths has to determine which processes need to be documented, depending on the type and size of the company, the complexity and criticality of each process and its interactions, and the availability of qualified personnel.

Planning of the Process: In this stage, the management of the company needs to define the interactions and sequences of activities within the process. After that, management has to measure and monitor the requirements of the process. As Lee et al. (2012) stated, these include factors such as waste, on-time delivery, customer satisfaction, and supplier performance.

Implementation and Measurement of the Process: Woolworths can implement this model through activities such as change management, communication, training, management involvement, and developing awareness among employees (Post and Preston 2012).

Analyzing the Process: To quantify the performance of the process, management needs to evaluate and analyze the process information obtained from measuring and monitoring data, and compare the performance of the process with the process requirements in order to confirm its effectiveness.

Improvement of the Process: Management needs to apply risk analysis tools to analyze potential problems related to the implementation of the model, identify the root causes of the problems, and eliminate them. The Plan-Do-Check-Act (PDCA) methodology can be used to define, implement, and control the corrective actions, as the sketch below illustrates.
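To make the five steps above concrete, here is a minimal Python sketch of how a process could be represented, measured, analyzed, and improved through a PDCA-style loop. All class names, targets, and the corrective action are illustrative assumptions made for this assignment, not part of ISO 9001 or Woolworths' actual systems.

```python
# Illustrative sketch of the five-step process approach: a Process records
# monitoring data (Step 3), analysis compares it to the requirement (Step 4),
# and a PDCA-style loop applies corrective action (Step 5).
# All names and numbers are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str                 # Step 1: the identified process
    target: float             # Step 2: planned requirement (e.g., on-time rate)
    measurements: list = field(default_factory=list)

    def measure(self, observed: float) -> None:
        """Step 3: record monitoring data."""
        self.measurements.append(observed)

    def gap(self) -> float:
        """Step 4: latest performance minus the requirement."""
        return self.measurements[-1] - self.target

def pdca(process: Process, corrective_action) -> None:
    """Step 5: Plan-Do-Check-Act until the process meets its requirement."""
    while process.gap() < 0:          # Check: requirement not yet met
        corrective_action(process)    # Plan + Do: apply a correction, re-measure
    # Act: the improved process becomes the new standard way of working

# Hypothetical example: a realization process for on-time delivery.
delivery = Process("on-time delivery", target=0.95)
delivery.measure(0.90)                # observed shortfall
pdca(delivery, lambda p: p.measure(p.measurements[-1] + 0.03))
print(f"final performance: {delivery.measurements[-1]:.2f}")
```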
Conclusion: In this assignment, the importance of the process approach model has been discussed and the different processes related to the model have been analyzed. It has also been discussed how the management of Woolworths can implement this model within the organization.

References

Chase, J., 2012. Operations management. Tata McGraw-Hill.
Hitt, M.A., Ireland, R.D. and Hoskisson, R.E., 2012. Strategic management cases: competitiveness and globalization. Cengage Learning.
Huczynski, A., 2012. Management gurus. Routledge.
Kapferer, J.N., 2012. The new strategic brand management: Advanced insights and strategic thinking. Kogan Page Publishers.
Lee, H., Kim, M.S. and Park, Y., 2012. An analytic network process approach to operationalization of five forces model. Applied Mathematical Modelling, 36(4), pp.1783-1795.
Post, J. and Preston, L., 2012. Private management and public policy: The principle of public responsibility. Stanford University Press.
Scheer, A.W., 2012. Business process engineering: reference models for industrial enterprises. Springer Science & Business Media.
Weske, M., 2012. Business process management architectures. In Business Process Management (pp. 333-371). Springer Berlin Heidelberg.

Monday, December 2, 2019

The history of radiation therapy machines Essay Example

Prior to the advent of ionizing particle beams, medicine had few options for treating some malignant and benign diseases. Physicians' need for new techniques to address these problems formed a vacuum, clearly demonstrated immediately following the discovery of X-rays in November 1895. By the first few months of 1896, X-rays were being used to treat skin lesions, prior to any understanding of the beam's physical or biological characteristics. The driving force was, of course, patients' overwhelming need of treatment for uncontrollable and debilitating diseases.

Radiation medicine developed over four major eras: the era of discovery, from Röntgen's discovery to about the late 1920s; the orthovoltage era, from the late 1920s through World War II; the megavoltage era, which began with higher-energy linacs for therapy in the 1950s and, with refinements such as intensity-modulated X-ray therapy (IMXT), is still ongoing; and the era of ion beams. Within this scheme, the roots of ion beam therapy (IBT) fall into the third or megavoltage phase, with the first treatment of humans in 1954.

J. M. Slater, Department of Radiation Medicine, Loma Linda University Medical Center, 11234 Anderson Street, Loma Linda, CA 92354, USA. In: U. Linz (ed.), Ion Beam Therapy, Biological and Medical Physics, Biomedical Engineering, DOI 10.1007/978-3-642-21414-1_1, Springer-Verlag Berlin Heidelberg 2012.

These eras represent a continuum rather than a succession of distinct periods, but are a convenient way to assess the evolution of radiation therapy (RT), and of IBT as a sophisticated part of it. In each era, the fundamental impetus for improvements came from patients' need for effective disease control while retaining or improving quality of life. These needs aroused the curiosity of physicians, physicists, and biologists, who, in their own ways in each of the eras, performed studies aimed at better understanding the tools they were working with and learning how to use them optimally for patients' benefit. A kind of teamwork occurred in all of the eras, although often no formal teams existed; an overarching goal, better patient treatment, guided the efforts. The development of ion beams is part of this process.

1.1.1 The Discovery Era

During this period of 30-35 years, the roots of RT were established. This era saw the discovery of the atom and various subatomic and electromagnetic particles; investigators strove to learn how to use them therapeutically. The salient discovery was Röntgen's in 1895 [1], although X-rays were produced earlier, if unwittingly, by others [2]. His report was followed soon by Becquerel's on the phenomenon of radioactivity [3] and, in 1898, by that of the Curies on the discovery of radium [4]. Becquerel and Curie reported on the physiologic effects of radium rays in 1901 [5]. Such discoveries stimulated speculation that radioactivity could be used to treat disease [6]; indeed, X-rays were used to treat a patient with breast cancer in January 1896 [7].
By 1904, RT texts were available [8, 9]; reports of the use of X-rays and radium (curietherapy) occurred throughout the first decade of the twentieth century. In retrospect, it is clear that lack of knowledge of the biological effects and mechanisms of action of the new rays led to much morbidity and poor cancer control [10]. However, such outcomes led physicians to ponder better modes of delivery; radiologists to study the effects of the rays on cells; and physicists to investigate the properties of these newly discovered radiations. Physics research led to the discovery of radioactive isotopes, which later were used for intracavitary and interstitial therapy; the same research led ultimately to an understanding of the structure of the atom. As the era progressed, biologists began to understand the effects of time and dose on cell survival. A crucial discovery occurred when Regaud [11] and Coutard [12] studied alternative ways of delivering the total radiation dose. Until that time, treatment was generally administered in one or a few large doses. Regaud demonstrated that fractionated therapy would eradicate spermatogenesis permanently; Coutard later showed that applying external beam therapy similarly could control head and neck cancer without the severe reactions and late effects that single large doses caused. These findings established that normal cells are better able to recover from radiation injury than cancer cells, and led radiation therapists to employ dose fractionation. During this era also, Coolidge developed a practical X-ray tube, allowing physicians to deliver higher-energy X-rays (180-200 kV) to deeper tumors [13]. Until then, X-rays were used mainly to treat superficial tumors. High-voltage transformers were also developed. Subsequently, physicists and engineers developed techniques to better measure the dose of radiation with X-rays.

The path to charged-particle therapy begins with Ernest Rutherford, whose work spurred understanding of atomic structure. Rutherford explained radioactivity as the spontaneous disintegration of atoms; he helped determine the structure of the atom; and he was the first to note that one element could be converted to another. A complete bibliography of Rutherford's works is available online, as part of an impressive site devoted to him [14]. The reader is referred to that source for publications relating to discoveries noted herein. In 1896, Rutherford began to use X-rays to initiate electrical conduction in gases; he repeated the study with rays from radioactive atoms after Becquerel's discovery. In 1898, he discovered that two separate types of emissions came from radioactive atoms; he named them alpha and beta rays, the latter of which were shown to be electrons. He showed that some heavy atoms decay into lighter atoms, and in 1907 demonstrated that the alpha particle is a helium atom stripped of its electrons. He and Geiger developed a method to detect single particles emitted by radioactive atoms. He investigated whether alpha particles were reflected from metals, discovering that some alpha rays were scattered directly backward from a thin film of gold; a massive yet minute entity, the atomic nucleus, turned back some alpha particles. In 1911, Rutherford proposed the nuclear model of the atom.
One of his students, Niels Bohr, placed the electrons in stable formation around the atomic nucleus; the Rutherford-Bohr model of the atom, with later modifications, became standard, and Rutherford scattering is still used today in basic and applied research. Wilhelm Wien, in 1898, had identified a positively charged particle equal in mass to the hydrogen atom. In 1919, Rutherford demonstrated that nitrogen under alpha-particle bombardment ejected what appeared to be nuclei of hydrogen; a year later, he equated the hydrogen nucleus with the charged entity that Wien had discovered. He named it the proton.

The discovery of X-rays, then gamma rays, then the structure of the atom with electrons, protons, and neutrons marked the first era. It was one of physical and biological experimentation to determine and understand the characteristics of the newly discovered beams and the effects of such rays on cells and tissues. Especially following the work of Rutherford, radioactive elements were identified and diligently studied as well. As treatment began with these new types of radiation prior to adequate knowledge of their characteristics and effects, errors were made and patients were injured. However, as knowledge and understanding increased during this era, two major divisions of radiation medicine, diagnosis and therapy, were developing, and patients were being treated, some of them successfully.

1.1.2 The Orthovoltage Era

The period from roughly the late 1920s to 1950 encompasses this era. Patients' needs for treatment of deep tumors were addressed largely by radium-based intracavitary and interstitial irradiation, in the absence of deeply penetrating external beam sources. It was also a transitional period: physical developments that led to supervoltage (approx. 500 kV-2 MV) RT were being made [15]. During the 1920s, advances in physics and engineering led to increased understanding of subatomic particles and techniques for energizing and focusing them. The first supervoltage X-ray tubes, built by Coolidge [16], were the basis of the linear accelerator, developed by Wideröe in 1927 and described in a German journal in 1928. E. O. Lawrence, despite knowing little German, used Wideröe's equations and drawings to conceptualize the cyclotron [17]. By the late 1920s, particle accelerators began to be constructed. Following the invention of the linear accelerator, devices operating on the principle of applying a potential difference were developed by Van de Graaff in 1929 [18] and by Cockcroft and Walton in 1932 [19, 20]. The cyclotron, also based on the principle of applying a difference in potential, was invented in 1930 by Lawrence and Livingston [21]. At Lawrence's laboratories at the University of California, Berkeley, accelerated particles were used to bombard atoms of various elements, forming, in some cases, new elements. Lawrence's brother, John, a physician, along with Robert Stone, pioneered neutron radiation for medical treatments [22]. Electron beam therapy became a practical and useful therapeutic option in 1940, when Kerst developed the betatron [23, 24]. The first machine produced 2 MeV electrons; later devices yielded up to 300 MeV. Medical research in particle therapy was largely sidelined during World War II, but high-energy physics investigations were spurred, notably in the effort to develop an atomic bomb. Some who worked on it, notably Robert R. Wilson, became instrumental in the development of IBT.
One major advance during this period was the synchrotron, conceived independently and at about the same time (1944-1945) by Veksler in the Soviet Union and McMillan in the United States. McMillan gave priority to Veksler [25]. The central concept was phase stability, by which high energies could be achieved without the need to build ever larger cyclotrons. Phase stability became the basis for all high-energy proton and electron accelerators thereafter. More importantly for medical use, the synchrotron made it easier to vary the energy of acceleration and thus the depth of penetration in tissue needed for optimal radiation treatments. The first, the Cosmotron at Brookhaven National Laboratory, began operation in 1952 [17].

1.1.3 The Megavoltage Era

As noted, in some respects this era is still in progress. A major advance, in response to the continuing need to treat tumors located in deep tissues, was the development of cobalt teletherapy machines and megavoltage linear electron accelerators. Cobalt teletherapy was capable of producing beams equivalent to approximately 1.3 MV X-rays. Electron linacs began to become clinically available as early as the mid-1950s [26], but widespread application occurred in the 1960s and 1970s. Their higher energies (4-6 MeV in earlier machines; 10-20 MeV in later units) made possible increased depth of penetration, greater skin sparing, and improved disease-control rates, which often doubled or tripled, through delivery of higher doses [27, 28]. There was still a major limitation, however, because the radiation sources, X-rays or gamma rays (cobalt), were difficult to control as they passed through tissue: they scattered laterally and passed beyond their targets, exiting patients opposite the point of entry and causing excessive radiation in normal tissues surrounding the tumors. To overcome this, radiation oncologists and medical physicists developed multifield treatment plans to spread unwanted radiation to larger volumes of normal tissue, thereby reducing the high dose to any one region. This tactic helped to reduce visible effects, but also increased the total dose delivered to normal tissues (volume integral dose). Doses sufficient to control many tumors were still unattainable because of continued acute complications and late effects caused by injury to normal tissues.

During this era, radiation medicine advanced as a discipline. Well-designed clinical studies demonstrated the efficacy of modern methods of delivering RT. One of the earliest was done by Gilbert Fletcher at the University of Texas M. D. Anderson Hospital; it demonstrated clearly that megavoltage treatment resulted in improved survival in cancer of the uterine cervix [29]. The founding of the American Society of Therapeutic Radiologists (ASTR) in 1966 (originally the American Club of Therapeutic Radiologists, founded in 1958) occurred partly as a means of encouraging careful studies such as those done by Fletcher. As time progressed, radiation therapists began to identify themselves primarily as radiation oncologists; in 1983, the organization became the American Society for Therapeutic Radiology and Oncology (ASTRO) [30]. In many respects, the megavoltage era is still in progress, although the development of higher-energy electron accelerators is quite mature. In recent years the emphasis in photon RT has been on conformal techniques, featuring computerized control and approaches such as IMXT.
The intent, as has been true throughout the megavoltage era, is to deliver a more effective dose to the target volume while reducing the dose to tissues that do not need to be irradiated. One might think of it as the multifield approach brought to its logical conclusion; indeed, the approach was anticipated by rotational arc therapy, popular for a time in the 1950s and 1960s. IMXT can conform the high dose to the target volume, but the modality employs a greater number of beams composed of photons; their absorption characteristics in tissue remain unchanged.

1.1.4 The Era of Ion Beams

The groundwork for IBT was laid in 1946 when Robert R. Wilson wrote the landmark paper in which he proposed that protons accelerated by machines such as Lawrence's could be used for medical purposes as well as scientific investigations [31]. In a conversation with the author, Wilson said that his insight was inspired by the medical work that Lawrence and Stone had done at Berkeley. In the immediate postwar years, higher-energy accelerators were just becoming available. Wilson reasoned that protons, among the charged particles, offered the longest range for a given energy and were then the simplest and most practical for medical use. Wilson's interest in the medical use of protons never ceased. When he was selected as first director of the National Accelerator Laboratory (later Fermilab), he encouraged the idea of a proton treatment facility. In 1972, Fermilab investigators proposed such a facility. However, physicians in the Chicago area advocated a neutron facility at the laboratory instead. After Wilson resigned the directorship in 1978, others at Fermilab, among them Miguel Awschalom, Donald Young, and Philip Livdahl, continued to believe in a patient-dedicated proton facility.

The first clinical use of a proton beam occurred at Berkeley in 1954 [32]; limited investigational proton treatment lasted for a few years afterward, until Berkeley scientists, notably Cornelius A. Tobias, began investigating biologically similar helium ions. Tobias was a nuclear physicist who, early in his career, became interested in applying physics to biology and medicine. His fundamental research interest was the effects of ionizing radiation on living cells, and he, like Wilson, foresaw the advantages of therapeutic ion beams long before most radiation oncologists did [33, 34]. Proton therapy (PT) began to spread to other physics laboratories around the world. The second use of a physics research accelerator for PT occurred in Uppsala, Sweden, in 1957. Physicians at Massachusetts General Hospital (MGH), led by a neurosurgeon, Raymond Kjellberg, began employing protons in 1961 for neurological radiosurgery; pituitary adenomas were first so treated at Harvard in 1963 [35], followed by fractionated PT for other malignant tumors in 1973 [36, 37], under the leadership of Herman D. Suit. Proton beam therapy began at Dubna, Russia (then USSR), in 1967; subsequently, other Russian facilities began operating at Moscow in 1969 and at St. Petersburg in 1975. The Japanese experience began in 1979, at Chiba; another facility opened at Tsukuba in 1983. At the Swiss Institute for Nuclear Research (now the Paul Scherrer Institute), PT commenced in 1985 [38].
The development of the world's first hospital-based proton facility began in 1970 at Loma Linda University Medical Center (LLUMC) with a feasibility study that revealed three major missing supportive developments that prevented optimal use of protons for patient treatments: computer competence, digital imaging (computerized tomography scanning), and treatment planning able to display the radiation dose pattern superimposed on the patient's anatomy and thereby plan treatments with the precision necessary to realize the benefits from these well-controllable charged particle beams (cf. Chap. 34 for details).

Fig. 1.1 Examples of data output from the computer-assisted treatment planning systems developed at LLUMC in the 1970s. The image from the first (ultrasound) planning system, for a patient treated in 1973, is shown at left; a planning image from the second LLUMC system, which employed CT scans, is shown at right for a patient treated in 1978. In addition to reproduction of the patient's anatomy, the CT-based system allowed assessment of density variations as the X-ray beams passed through tissue.

Industry provided sufficient computer competence and the needed imaging technology by the early 1970s. LLUMC investigators began developing the concepts needed for computer-assisted radiation treatment planning in the late 1960s and completed the first unit, utilizing ultrasonography, in the early 1970s [39]. In the mid-1970s, this was converted to a CT-based unit, using one of the first GE scanners developed (Fig. 1.1). This system provided electron density data, which made possible placement of the Bragg peak precisely within the designated treatment volume [40]. Michael Goitein at MGH expanded the planning system to three-dimensional capabilities, thus providing excellent treatment-planning capabilities for heavy charged particles [41, 42]. The establishment of such planning systems provided one of the essential prerequisites for proton (and other heavy charged-particle) RT [43].

By 1984, all prerequisites for establishing optimal ion beam facilities for clinical use were in place. This was clearly recognized by some of the staff at Fermilab and at the MGH and LLUMC departments of radiation medicine. The author approached the leadership of Fermilab, Deputy Director Philip V. Livdahl and Director Leon M. Lederman, who agreed to provide Fermilab support for developing a conceptual design for such a clinical facility; to continue with development of an engineering design; and to produce the accelerator, beam transport, and beam delivery systems for LLUMC to begin PT clinical trials (Figs. 1.2 and 1.3).

A major turning point in PT, therefore, occurred in 1990, with the opening of the world's first hospital-based proton treatment center at LLUMC. This event occurred more than 20 years after the author and colleagues began to investigate and work toward developing such a facility [44, 45]. Protons were selected as the particle of choice at LLUMC because the relatively low LET of protons, as compared to that of heavier ions, would allow selective destruction of the invasive cancer cells growing among normal cells, as had been demonstrated for many years and documented by the worldwide data from using photons (X-rays). By this period, the RBE was known to be very similar for the two kinds of radiation.

Fig. 1.2 Leon Lederman, Ph.D., Director of Fermilab from 1979 to 1989; recipient of the Nobel Prize for Physics in 1988. In 1986, Dr. Lederman approved Fermilab's collaboration with LLUMC in developing the world's first hospital-based proton treatment center.
Loma Linda investigators realized that optimal applications and accumulation of meaningful clinical data could be made only in a facility designed to support patient needs and to operate within a medical environment, with access to a large patient volume and the supporting services available in a medical center. To date, over 15,000 patients have been treated at LLUMC.

Protons were not the only particles investigated for therapy. In the 1950s and 1960s, some physicists and radiation biologists were enthusiastic about the therapeutic possibilities of negative pi-mesons and ions heavier than the hydrogen nucleus. It was then not a given in the minds of many that the particle employed most commonly would be the proton. Basing their suggestions on the pion capture phenomenon, Fowler and Perkins proposed pi-mesons for clinical use [46]. Pions were expected to become clinically applicable [47], and trials were conducted at three centers: Los Alamos National Laboratory, the Paul Scherrer Institute in Switzerland, and TRIUMF, in British Columbia, Canada. Although some successful outcomes were reported [48-50], in general, the anticipated clinical outcomes did not materialize.

Fig. 1.3 Two Fermilab personnel who helped make the hospital-based proton center at LLUMC a reality. Philip Livdahl (left) was Deputy Director of the laboratory in 1986, when the decision was made to proceed with the center. Livdahl had been a colleague of Robert Wilson; he shared Wilson's commitment to proton therapy. Lee Teng, Ph.D. (right), shown with the Loma Linda proton synchrotron under construction in the late 1980s, was the chief designer of the accelerator.

Helium ion therapy was begun at Berkeley by Tobias and colleagues in 1957 [51]; some notable outcomes supervened [52-54]. Clinical studies with heavier ions were begun by Joseph R. Castro and associates in 1974 [55, 56]; Tobias elucidated the molecular and cellular radiobiology of the particles [57]. Advantages of heavy ions, though appealing theoretically, were not well understood clinically; the Berkeley studies were undertaken partly to help develop this understanding. Several trials addressed specialized indications such as bone sarcomas and bile duct carcinomas [58-60]. However, the cost of developing and delivering heavy ions eventually could not be justified by the relatively limited patient experience, as had been true in the pion trials [61]. Studies of heavy ions shifted to Japan and Germany, under the leadership of such individuals as Hirohiko Tsujii at Chiba and Gerhard Kraft at Darmstadt.

Today, several ion beam facilities operate around the world, including facilities in the United States, Japan, Germany, Russia, France, Canada, China, England, Italy, South Africa, South Korea, Sweden, and Switzerland. Most centers offer protons, but carbon ion therapy is available at HIMAC (Chiba) and HIBMC (Tatsuno) in Japan, and at HIT (Heidelberg) in Germany; the two latter centers offer both protons and carbon ions [62]. Thousands have been treated to date with carbon ion therapy [63, 64], but it has been noted that systematic experimental studies to find the optimum ion have not yet been pursued [65]; ions with atomic numbers greater than 6 are thought unlikely to undergo a clinical revival, but those with atomic numbers between 1 and 6 may be alternatives to carbon.
1.2 Perspective

The development of IBT was a response to the need to preserve normal tissue as much as possible, so as to lessen the side effects and complications that often barred delivery of sufficient dose levels to control tumors, even in the mature megavoltage era. Investigations by physicists and radiation biologists from the 1940s to the 1980s pointed to the superiority of charged particles in comparison to photon and neutron beams. Both Wilson and Tobias told the author that they found it easier to explain and demonstrate the advantages of protons and other ions to fellow scientists than to physicians. As evidence mounted, however, some physicians recognized the physical attributes of ions and were able to understand how these attributes would translate into clinical advantages beneficial to patients. From the clinician's point of view, the advantages ultimately rested on the fact that ion beams are precisely controllable in three dimensions, while photon and neutron beams are less controllable in two dimensions and are uncontrollable in the third. The controllability of ion beams, in the hands of skillful physicians, provides a superior tool for cancer therapy and for dealing with difficult-to-treat benign diseases.

Curing patients who have solid tumors requires controlling those tumors at their site or region of origin. Normal-tissue damage, whether occasioned by surgical trauma or effects of radiation or chemotherapy, restricts the ability to ablate malignant cells. Keeping the volume integral dose to normal tissues as low as possible is a fundamental issue in radiation medicine. Rubin and Casarett demonstrated that there is no safe radiation dose, in terms of avoiding sequelae in irradiated normal tissues [66]. Later, Rubin and colleagues noted a cascade of cytokines in murine pulmonary fibrosis [67]. Biological studies are now commonly finding other injury mechanisms. Research, therefore, is always ongoing to develop new techniques to overcome these imposed limitations of normal-cell damage. Proton and other charged-particle beams are one outcome of such research.

Any radiation beam, regardless of the basic particle employed, can destroy any cancer cell or any living entity if the dose is high enough. Historically, therefore, the limiting factor in radiation medicine has been the normal cell and the need to avoid irradiating normal tissues, so as to permit normal-tissue repair and avoid treatment-compromising side effects. This was the fundamental reason behind dose fractionation and multifield techniques. During the early years of radiation medicine, the major problem of practitioners was their inability to focus the invisible radiation beam precisely on the invisible tumor target. Improvements in imaging technologies, along with computer-assisted, CT-based radiation treatment planning, enabled radiation oncologists to deliver precision external-beam radiation treatments to any anatomic site. This advance was limited, however, because conformity with photon beams, which has reached a high degree of precision with IMXT, requires a trade-off: an increased normal-tissue volume integral dose.

Fig. 1.4 An example of improved controllability needed to spare normal tissues from unnecessary radiation. A 3-field proton plan (left) is compared with a 6-field IMXT plan for treating a large liver cancer. Both modalities effect similar high-dose coverage of the clinical target volume (red outline), but the superior controllability of the proton beam enables the physician to avoid most of the normal liver tissue receiving low-dose irradiation in the IMXT plan.
Ion beams forming a Bragg peak offer a means to achieve the needed increased conformity, i.e., sparing a greater volume of normal tissue (Fig. 1.4), because of their charge and increased mass. Physicians using ion beams can now plan treatments to place the Bragg peak in targeted tissues and avoid unacceptable normal-tissue effects. Such capability is facilitated not only by precision therapy planning but also by precision positioning and alignment (cf. Chaps. 33 and 34). This creates a new focus for research and development in the upcoming era. Included in this era, one can expect studies on cell-organelle effects with each particle and delivery technique used and, ultimately, biological dosimetry to be developed and merged with physical dosimetry for further improvements in treatment planning. We can also expect to use much more optical imaging, fused with our more conventional imaging techniques, to better understand the physiological attributes and biological effects of targeted cells and nearby normal cells following treatment. In future years, this increased understanding of cell physiology should help provide a more reasonable rationale for selecting the particle…
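The depth control behind the Bragg peak discussion above can be made quantitative with the empirical Bragg-Kleeman rule, which relates proton beam energy to the water depth at which the peak occurs. The sketch below is an illustration added here, not material from the chapter; the fit constants are commonly quoted approximate values for protons in water, not parameters from this source.

```python
# Bragg-Kleeman rule (empirical approximation): range R ~ alpha * E^p.
# alpha and p are approximate fit values for protons in water.
ALPHA_CM = 0.0022  # cm / MeV^p, approximate fit constant
P_EXP = 1.77       # dimensionless exponent, approximate fit value

def proton_range_cm(energy_mev: float) -> float:
    """Approximate depth (cm of water) at which the Bragg peak occurs."""
    return ALPHA_CM * energy_mev ** P_EXP

for e in (70, 100, 150, 200, 250):
    print(f"{e:3d} MeV -> ~{proton_range_cm(e):5.1f} cm in water")
# Unlike photon beams, whose dose falls off gradually and exits the patient,
# the proton dose rises to this depth and then stops: choosing the beam
# energy chooses where the dose is deposited.
```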