Monday, December 30, 2019

The Importance Of The Communication Process Of Health Care...

Australia is a country in which cultural pluralism is legally accepted and several linguistically diverse communities live as one nation. A culturally competent and linguistically diverse health care workforce is therefore a key factor in determining patient outcomes, enhancing patient compliance and reducing health disparities, in addition to the quality of services and systems available in the country. Although multiple definitions appear in the literature, culturally and linguistically diverse (CALD) people are defined as English and non-English speaking communities from different cultures (Optus yes, 2015). These groups and individuals differ in their religion, race, language and ethnicity (Culturally and Linguistically Diverse People, 2009). The effectiveness of the health care workforce's communication with these communities and individuals is determined by its cultural competency. Mrs Romano's case study is a clear example of the poor outcomes that follow when health care workers lack these competencies in managing CALD individuals in a clinical setting. It is obvious that this enrolled nurse did not adopt communication strategies suited to this situation to maintain effective communication with her Italian client, who cannot speak English but holds equal rights to the same standard of care received by an English-speaking client with a similar condition. According to Sheldon (2004), communication in a clinical setting is a process of…

Sunday, December 22, 2019

Ancient Egypt's Unique And Defining Burial Practices

Ancient Egypt is remembered for its opulent history and culture, along with its unique and defining burial practices. Ancient Egyptian religion was an intricate and complex system of belief, based on the worship of many gods who were believed to exercise constant control over all earthly elements. The legends of these gods foretold and explained the influences of the forces they represented. The actual practice of Egyptian religion was an effort by both Pharaoh and nobles to provide both offerings and rule for the gods and gain their favor, in the hope that their souls would live on in the afterlife. One piece of Egyptian religion is the ancient pyramids; these tombs were just the… It wasn't until later that the kings of Egypt, also referred to as Pharaohs, became the center of religion for the Egyptians. Even though the Pharaoh was human, he was thought to be directly descended from the gods. Because of this belief, the Pharaoh came to believe that he deserved a burial reflecting such stature. The Egyptian government and society put forth large amounts of resources to fund these burial rituals and to construct the pyramids, which brings into context the ancient belief in the afterlife. Ancient Egyptian society did all it could to safeguard that souls would survive after death, while in society today, depending on the religion, life after death is believed to be attainable upon dying. One way they believed the soul could enter the afterworld was through lavish tombs (Stewart, Harry M.). These tombs were filled not only with delectable foods and drink but also with offerings to the gods in exchange for maintaining the bodies and spirits of the deceased. The Egyptians' rituals for the care of their dead were very detailed. Egyptians believed that humans possessed a ka, or what we would call a soul, which would leave the body at the moment of death. During life, the ka was believed to obtain its nourishment from the food and drink the person took in; so, the assumption was, after death the ka must continue receiving this nourishment.

Saturday, December 14, 2019

Digital Cinema

Scott McQuire

Millennial fantasies

As anyone interested in film culture knows, the last decade has witnessed an explosion of pronouncements concerning the future of cinema. Many are fuelled by naked technological determinism, resulting in apocalyptic scenarios in which cinema either undergoes digital rebirth to emerge more powerful than ever in the new millennium, or is marginalised by a range of 'new media' which inevitably include some kind of broadband digital pipe capable of delivering full-screen 'cinema quality' pictures on demand to home consumers. The fact that the double-edged possibility of digital renaissance or death by bytes has coincided with celebrations of the 'centenary of cinema' has undoubtedly accentuated the desire to reflect more broadly on the history of cinema as a social and cultural institution. It has also intersected with a significant transformation of film history, in which the centrality of 'narrative' as the primary category for uniting accounts of the technological, the economic and the aesthetic in film theory has become subject to new questions. Writing in 1986, Thomas Elsaesser joined the revisionist project concerning 'early cinema' to cinema's potential demise: 'A new interest in its beginnings is justified by the very fact that we might be witnessing the end: movies on the big screen could soon be the exception rather than the rule'.1 Of course, Elsaesser's speculation, which was largely driven by the deregulation of television broadcasting in Europe in conjunction with the emergence of new technologies such as video, cable and satellite in the 1980s, has been contradicted by the decade-long cinema boom in the multiplexed 1990s. It has also been challenged from another direction, as the giant-screen 'experience' of large-format cinema has been rather unexpectedly transformed from a bit player into a prospective force.
[Scott McQuire, 'Impact Aesthetics: Back to the Future in Digital Cinema?', Convergence: The Journal of Research into New Media Technologies, vol. 6, no. 2, 2000, pp. 41-61. © Scott McQuire. All rights reserved. Deposited to the University of Melbourne ePrints Repository with permission of Sage Publications.]

However, in the same article, Elsaesser raised another issue which has continued to resonate in subsequent debates: 'Few histories fully address the question of why narrative became the driving force of cinema and whether this may itself be subject to change. Today the success of SF as a genre, or of directors like Steven Spielberg, whose narratives are simply anthology pieces from basic movie plots, suggests that narrative has to some extent been an excuse for the pyrotechnics of ILM.'3

Concern for the demise, if not of cinema per se, then of narrative in cinema, is widespread in the present. In the recent special 'digital technology' issue of Screen, Sean Cubitt noted a 'common intuition among reviewers, critics and scholars that something has changed in the nature of cinema — something to do with the decay of familiar narrative and performance values in favour of the qualities of the blockbuster'.4 Lev Manovich has aligned the predominance of 'blockbusters' with 'digital cinema' by defining the latter almost entirely in terms of increased visual special effects: 'A visible sign of this shift is the new role which computer-generated special effects have come to play in the Hollywood industry in the last few years. Many recent blockbusters have been driven by special effects; feeding on their popularity'.5 In his analysis of Hollywood's often anxious depiction of cyberspace in films such as The Lawnmower Man (1992), Paul Young argues that 'cyberphobic films overstress the power of the visual in their reliance on digital technology to produce spectacle at the expense of narrative', and adds that this is 'a consequence that [Scott] Bukatman has argued is latent in all special effects'. A more extreme (but nevertheless common) view is expressed by film maker Jean Douchet: '[Today] cinema has given up the purpose and the thinking behind individual shots [and narrative], in favour of images — rootless, textureless images — designed to violently impress by constantly inflating their spectacular qualities'.7 'Spectacle', it seems, is winning the war against 'narrative' all along the line. Even a brief statistical analysis reveals that 'special effects'-driven films have enjoyed enormous recent success, garnering an average of over 60% of the global revenue taken by the top 10 films from 1995-1998, compared to an average of 30% over the previous four years.8 Given that the proportion of box office revenue taken by the top 10 films has held steady or increased slightly in the context of a rapidly expanding total market, this indicates that a handful of special-effects films are generating huge revenues each year. While such figures don't offer a total picture of the film industry, let alone reveal which films will exert lasting cultural influence, they do offer a snapshot of contemporary cultural taste refracted through studio marketing budgets.
Coupled to the recent popularity of paracinematic forms, such as large-format and special-venue films, the renewed emphasis on 'spectacle' over 'narrative' suggests another possible end-game for cinema: not the frequently prophesied emptying of theatres made redundant by the explosion of home-based viewing (television, video, the internet), but a transformation from within which produces a cinema no longer resembling its (narrative) self, but something quite other. Complementing these debates over possible cinematic futures is the fact that any turn to spectacular film 'rides' can also be conceived as a return — whether renaissance or regression is less clear — to an earlier paradigm of film-making famously dubbed the 'cinema of attraction' by Tom Gunning. Gunning long ago signalled this sense of return when he commented: 'Clearly in some sense recent spectacle cinema has re-affirmed its roots in stimulus and carnival rides, in what might be called the Spielberg-Lucas-Coppola cinema of effects'.9 For Paul Arthur, developments in the 1990s underline the point: 'The advent of Imax 3-D and its future prospects, in tandem with the broader strains of a New Sensationalism, provide an occasion to draw some connections with the early history of cinema and the recurrent dialectic between the primacy of the visual and, for lack of a better term, the sensory'.10

In what follows here, I want to further consider the loops and twists of these debates, not so much with the grand ambition of resolving them, but firstly of adding some different voices to the discussion — particularly the voices of those involved in film production.11 My intention is not to elevate empiricism over theory, but to promote dialogue between different domains of film culture which meet all too rarely, and, in the process, to question the rather narrow terms in which 'digital cinema' has frequently entered recent theoretical debates. Secondly, I want to consider the relation between 'narrative' and 'spectacle' as it is manifested in these debates. My concern is that there seems to be a danger of confusing a number of different trajectories — such as cinema's on-going efforts to demarcate its 'experience' from that of domestic entertainment technologies, and the turn to blockbuster exploitation strategies — and conflating them under the heading of 'digital cinema'. While digital technology certainly intersects with, and significantly overlaps, these developments, it is by no means co-extensive with them.

'Spectacular sounds': cinema in the digital domain

Putting aside the inevitable hype about the metamorphosis of Hollywood into 'Cyberwood', like many others I am convinced that digital technology constitutes a profound revolution in cinema, primarily because of its capacity to cut across all sectors of the industry simultaneously, affecting film production, narrative conventions and audience experience. In this respect, the only adequate point of reference for the depth and extent of current changes is the transformations which took place with the introduction of synchronised sound in the 1920s. However, while the fundamental level at which change is occurring is widely recognised, it has been discussed primarily in terms of the impact of CGI (computer-generated imaging) on the film image. A more production-oriented approach would most likely begin elsewhere, with what Philip Brophy has argued is among 'the most overlooked aspects of film theory and criticism (both modern and postmodern strands)' — sound.12

A brief flick through recent articles on digital cinema confirms this neglect: Manovich locates 'digital cinema' solely in a historical lineage of moving pictures; none of the articles in the recent Screen dossier mention sound; and even Eric Faden's 'Assimilating New Technologies: Early Cinema, Sound and Computer Imaging' only uses the introduction of synchronised sound as an historical analogy for discussing the contemporary effect of CGI on the film image.13 While not entirely unexpected, this silence is still somewhat surprising, given the fact that digital sound technology was adopted by the film industry far earlier and more comprehensively than was CGI. And, at least until the early 1990s with films like Terminator 2 (1991) and Jurassic Park (1993), the effect on audience experience was arguably far greater than that of digital imaging. Dominic Case [Group Services and Technology Manager at leading Australian film processor Atlab] argued in 1997: 'I am more and more convinced that the big story about film technology as far as audiences are concerned in the past few years has been sound. Because, although you can do fancy digital things, the image remains glued to that bit of screen in front of your eyes, and it's not really any bigger… But the sound has gone from one woolly sound coming from the back of the screen with virtually no frequency range or dynamic range whatsoever… to something that fills the theatre in every direction with infinitely more dynamic range and frequency range. To me, that's an explosion in experience compared to what you are seeing on the screen.' However, the visual bias of most film theory is so pervasive that this transformation often passes unremarked. Part of the problem is that we lack the necessary conceptual armature: there are no linkages which pull terms such as 'aural' or 'listener' into the sort of semantic chain joining spectacle and spectator to the adjective 'spectacular'.
Film sound-mixer Ian McLoughlin notes: 'Generally speaking, most people are visually trained from birth… Very few people are trained to have an aural language and, as a result, there isn't much discussion about the philosophy of the sound track… There has been very, very little research done into the psycho-acoustic effects of sound and the way sound works sociologically on the audience.'14 Compounding this absence is the fact that the digital revolution in sound is, in many respects, the practical realisation of changes initiated with the introduction of Dolby Stereo in 1975. (On the other hand, the fact that CGI entered a special effects terrain already substantially altered by techniques of motion control, robotics and animatronics didn't prevent critical attention to it.) Four-track Dolby Stereo led to a new era of sound experimentation beginning with films such as Star Wars (1977) and Close Encounters of the Third Kind (1977). As renowned sound mixer Roger Savage (whose credits include Return of the Jedi, 1983; Shine, 1996; and Romeo + Juliet, 1996) recalls: 'Prior to that, film sound hadn't changed for probably 30 years. It was Mono Academy… Star Wars was one of the first films that I can remember where people started coming out of the theatre talking about the sound track'.15 While narrative sound effects such as dialogue and music were still generally concentrated in the front speakers, the surround-sound speakers became the vehicles for a new range of 'spectacular' sound effects. In particular, greater emphasis was given to boosting low-frequency response, explicitly mirroring the amplified ambience of rock music.
There was also greater attention given to the 'spatialisation' of discrete sound elements within the theatre. As Rick Altman has argued, these developments presented a significant challenge to one of the fundamental precepts of classical Hollywood narrative: the unity of sound and image and the subservience of sound effects to narrative logic: 'Whereas Thirties film practice fostered unconscious visual and psychological spectator identification with characters who appear as a perfect amalgam of image and sound, the Eighties ushered in a new kind of visceral identification, dependent on the sound system's overt ability, through bone-rattling bass and unexpected surround effects, to cause spectators to vibrate — quite literally — with the entire narrative space. It is thus no longer the eyes, the ears and the brain that alone initiate identification and maintain contact with a sonic source; instead, it is the whole body that establishes a relationship, marching to the beat of a different woofer. Where sound was once hidden behind the image in order to allow more complete identification with the image, now the sound source is flaunted, fostering a separate sonic identification contesting the limited rational draw of the image and its characters.'16 Altman's observation is significant in this context, inasmuch as it suggests that the dethroning of a certain model of narrative cinema had begun prior to the digital threshold, and well before the widespread use of CGI. It also indicates the frontline role that sound took in the film industry's initial response to the incursions of video: in the 1980s the new sound of cinema was a primary point of differentiation from domestic image technologies. However, while Dolby certainly created a new potential for dramatic sound effects, in practice most film makers remained limited by a combination of logistical and economic constraints.
In this respect, the transition to digital sound has been critical in creating greater latitude for experimentation within existing budget parameters and production time frames. In terms of sound production, Roger Savage argues: 'The main advantages in digital are the quality control, the speed and the flexibility'. (This is a theme which is repeated with regard to the computerisation of other areas of film making such as picture editing and CGI.) Enhanced speed, flexibility and control stem from a reduction in the need for physical handling and a refinement of precision in locating and manipulating individual elements. In sound production, libraries of analogue tape reels each holding ten minutes of sound have given way to far more compact DAT tapes and hard-drive storage. The entire production process can now often be realised on a single digital workstation. There is no need for a separate transfer bay, and, since digital processing involves the manipulation of electronic data, there is no risk of degrading or destroying original recordings by repeated processing. Once the sounds are catalogued, digital workstations grant random access in a fraction of a second (eliminating tape-winding time), and, unlike sprocket-based sound editing, all the tracks which have been laid can be heard immediately in playback. The creative pay-off is an enhanced ability to add complexity and texture to soundtracks. In terms of sound reproduction, the most marked change resulting from six-track digital theatre systems is improved stereo separation and frequency response, which assists better music reproduction in theatres — a change which goes hand in glove with the increased prominence that music and soundtracks have assumed in promoting and marketing films in recent years. The enhanced role of sound in cinema is even more marked for large-format films which, because of their high level of visual detail, demand a correspondingly high level of audio detail.
Ian McLoughlin (who, amongst many other things, shares sound-mixing credits with Savage for the large-format films Africa's Elephant Kingdom, 1998 and The Story of a Sydney, 1999) comments: 'If you look at the two extremes of image technology, if you look at television, and then you look at something like Imax, the most interesting difference is the density of the sound track that is required with the size of the picture. When you're doing a TV mix, you try to be simple, bold. You can't get much in or otherwise it just becomes a mess. With 35mm feature films you're putting in 10, 20 times more density and depth into the sound track as compared to television, and… when you go to Imax, you need even more.' McLoughlin also makes a significant point concerning the use (or abuse) of digital sound: 'When digital first came out and people found that they could make enormously loud sound tracks, everyone wanted enormously large sound tracks… Unfortunately some people who present films decided that the alignment techniques that companies like Dolby and THX have worked out aren't to their liking, and they think audiences like a lot of sub-bass and so they sometimes wind that up… [S]uddenly you've got audiences with chest cavities being punched due to the amount of bottom end… Dolby and screen producers and screen distributors in America have actually been doing a lot of research into what they are calling the "annoyance factor" of loud sound tracks. Because audiences are getting turned off by overly jarring, overly sharp soundtracks.' This comment is worth keeping in mind for two reasons. Firstly, it underlines the fact that the image is by no means the only vehicle for producing cinematic affect: in this sense, 'impact aesthetics' offers a more apt description of the trajectory of contemporary cinema than 'spectacle'. Secondly, it warns against making hasty generalisations when assessing the long-term implications of CGI.
While digital imaging undoubtedly represents a significant paradigm shift in cinema, it is also feasible that the 1990s will eventually be seen more as a teething period of 'gee whizz' experimentation with the new digital toolbox, which was gradually turned towards other (even more 'narrative') ends. (The way we now look at early sound films is instructive: while contemporary audiences were fascinated by the mere fact that pictures could 'talk', in retrospect we tend to give more weight to the way sound imposed new restrictions on camera movement, location shooting and acting style.)

Painting with light

In contrast to the relative dearth of attention given to changes in areas such as sound and picture editing, digital manipulation of the film image has received massive publicity. While this is partly the result of deliberate studio promotion, it also reflects the profound changes in cinematic experience that computers have set in train. When we can see Sam Neill running from a herd of dinosaurs — in other words, when we see cinematic images offering realistic depictions of things we know don't exist — it is evident that the whole notion of photo-realism which has long been a central plank of cinematic credibility is changing. But how should this change be understood? Is it simply that 'live action' footage can now be 'supplemented' with CG elements which replace earlier illusionistic techniques such as optical printing, but leave cinema's unique identity as an 'art of recording' intact?
Or is a new paradigm emerging in which cinema becomes more like painting or animation? Lev Manovich has recently taken the latter position to an extreme, arguing that 'Digital cinema is a particular case of animation which uses live-action footage as one of its many elements', and concluding: 'In retrospect, we can see that twentieth century cinema's regime of visual realism, the result of automatically recording visual reality, was only an exception, an isolated accident in the history of visual representation…'.17 While I suspect that Manovich significantly underestimates the peculiar attractions of 'automatic recording' (which produced what Walter Benjamin termed the photograph's irreducible 'spark of contingency', what Barthes ontologised as the photographic punctum), it is clear the referential bond linking camera image to physical object has come under potentially terminal pressure in the digital era. However, any consideration of 'realism' in cinema is immediately complicated by the primacy of fictional narrative as the dominant form of film production and consumption. Moreover, cinema swiftly moved from adherence to the ideal of direct correspondence between image and object which lay at the heart of classical claims to photographic referentiality. 'Cheating' with the order of events, or the times, locations and settings in which they occur, is second nature to film-makers. By the time cinema 'came of age' in the picture palace of the 1920s, a new logic of montage, shot matching and continuity had coalesced into the paradigm of 'classical narrative', and cinematic credibility belonged more to the movement of the text than to the photographic moment — a shift Jean-Louis Comolli has neatly described in terms of a journey from purely optical to psychological realism.18 Within this paradigm all imaginable tactics were permissible in order to imbue pro-filmic action with the stamp of cinematic authority — theatrical techniques such as performance, make-up, costumes, lighting and set design were augmented by specifically cinematic techniques such as stop-motion photography and rear projection, as well as model-making and matte painting, which entered the screen world via the optical printer. Given this long history of simulation, the digital threshold is perhaps best located in terms of its effect on what Stephen Prince has dubbed 'perceptual realism', rather than in relation to an abstract category of 'realism' in general. Prince argues: 'A perceptually realistic image is one which structurally corresponds to the viewer's audio-visual experience of three-dimensional space… Such images display a nested hierarchy of cues which organise the display of light, colour, texture, movement and sound in ways that correspond to the viewer's own understanding of these phenomena in daily life. Perceptual realism, therefore, designates a relationship between the image on film and the spectator, and it can encompass both unreal images and those which are referentially realistic. Because of this, unreal images may be referentially fictional but perceptually realistic.'19 I have emphasised Prince's evocation of fidelity to 'audio-visual experience' because it underlines the extent to which the aim of most computer artists working in contemporary cinema is not simply to create high-resolution images, but to make these images look as if they might have been filmed. This includes adding various 'defects', such as film grain, lens flare, motion blur and edge halation. CG effects guru Scott Billups argues that film makers had to 'educate' computer programmers to achieve this end: 'For years we were saying: "Guys, you look out on the horizon and things get grayer and less crisp as they get farther away". But those were the types of naturally occurring event structures that never got written into computer programs. They'd say, "Why do you want to reduce the resolution? Why do you want to blur it?"'.20
But those were the types of naturally occurring event structures that never got written into computer programs. They’d say ‘Why do you want to reduce the resolution? Why do you want to blur it?’.20 By the 1990s many software programs had addressed this issue. As Peter Webb (one of the developers of Flame) notes: Flame has a lot of tools that introduce the flaws that one is trained to see. Even though we don’t notice them, there is lens flare and motion blur, and the depth of field things, and, if you don’t see them, you begin to get suspicious about a shot.21 In other words, because of the extent to which audiences have internalised the camera’s qualities as the hallmark of credibility, contemporary cinema no longer aims to mime ‘reality’, but ‘camera-reality’. Recognising this shift underlines the heightened ambivalence of realism in the digital domain. The film maker’s ability to take the image apart at ever more minute levels is counterpointed by the spectator’s desire to comprehend the resulting image as ‘realistic’ — or, at least, as equivalent to other cine-images. In some respects, this can be compared to the dialectic underlying the development of montage earlier this century, as a more ‘abstract’ relation to individual shots became the basis for their reconstitution as an ‘organic’ text. But instead of the fragmentation and re-assemblage of the image track over time, which founded the development of classical narrative cinema and its core ‘grammatical’ structures such as shot/reverse shot editing, digital technology introduces a new type of montage: montage within the frame, whose prototype is the real time mutation of morphing. However, while ‘perceptual realism’ was achieved relatively painlessly in digital sound, the digital image proved far more laborious. Even limited attempts to marry live action with CGI, such as TRON (1982) and The Last Starfighter (1984), proved unable to sustain the first wave of enthusiasm for the computer.
As one analyst observed: ‘The problem was that digital technology was both comparatively slow and prohibitively expensive. In fact, workstations capable of performing at film resolution were driven by Cray super-computers’.22 It is these practical exigencies, coupled to the aesthetic disjunction separating software programmers from film makers I noted above, rather than a deeply felt desire to manufacture a specifically electronic aesthetic, which seem to underlie the ‘look’ of early CGI.23 Exponential increases in computing speed, coupled to decreases in computing cost, not only launched the desktop PC revolution in the mid-1980s, but made CGI in film an entirely different matter. The second wave of CGI was signalled when Terminator 2: Judgement Day (1991) made morphing a household word.24 Two years later the runaway box-office success of Jurassic Park (1993) changed the question from whether computers could be effectively used in film making to how soon this would happen. The subsequent rash of CGI-driven blockbusters, topped by the billion dollar plus gross of Cameron’s Titanic (1997), has confirmed the trajectory. Cameron is one of many influential players who argue that cinema is currently undergoing a fundamental transformation: ‘We’re on the threshold of a moment in cinematic history that is unparalleled. Anything you imagine can be done. If you can draw it, if you can describe it, we can do it. It’s just a matter of cost’.25 While this claim is true at one level — many tricky tasks such as depicting skin, hair and water, or integrating CGI elements into live action images shot with a hand-held camera, have now been accomplished successfully — it is worth remembering that ‘realism’ is a notoriously slippery goal, whether achieved via crayon, camera or computer.
Dennis Muren’s comments on his path-breaking effects for Jurassic Park (which in fact had only 5 to 6 minutes of CGI and relied heavily on models and miniatures, as did more recent ‘state of the art’ blockbusters such as The Fifth Element, 1997, and Dark City, 1998) bear repeating: ‘Maybe we’ll look back in 10 years and notice that we left things out that we didn’t know needed to be there until we developed the next version of this technology’. Muren adds: In the Star Wars films you saw lots of X-wing fighters blow up, but these were always little models shot with high-speed cameras. You’ve never seen a real X-wing blow up, but by using CGI, you might just suddenly see what looks like a full-sized X-wing explode. It would be all fake of course, but you’d see the structure inside tearing apart, the physics of this piece blowing off that piece. Then you might look back at Star Wars and say, ‘That looks terrible’.26 Clearly, George Lucas shared this sentiment, acknowledging in 1997 that ‘I’m still bugged by things I couldn’t do or couldn’t get right, and now I can fix them’.27 The massive returns generated by the ‘digitally enhanced’ Star Wars trilogy raise the prospect of a future in which blockbuster movies are not re-made with new casts, but perpetually updated with new generations of special effects.

Stop the sun, I want to get off

Putting aside the still looming question of digital projection, the bottom line in the contemporary use of digital technology in cinema is undoubtedly ‘control’: particularly the increased control that film makers have over all the different components of image and sound tracks. Depending on a film’s budget, the story no longer has to work around scenes which might be hard to set up physically or reproduce photo-optically — they are all grist to the legions of screen jockeys working in digital post-production houses.
George Lucas extols the new technology for enhancing the ability to realise directorial vision: I think cinematographers would love to have ultimate control over the lighting; they’d like to be able to say, ‘OK, I want the sun to stop there on the horizon and stay there for about six hours, and I want all of those clouds to go away’. Everybody wants that kind of control over the image and the storytelling process. Digital technology is just the ultimate version of that.28 A direct result of digital imaging and compositing techniques has been an explosion of films which, instead of ‘fudging’ the impossible, revel in the capacity to depict it with gripping ‘realism’: Tom Cruise’s face can be ripped apart in real time (Interview with the Vampire, 1994), the White House can be incinerated by a fireball from above (Independence Day, 1996), New York can be drowned by a tidal wave, or smashed by a giant lizard (Deep Impact, Godzilla, 1998). But, despite Lucas’ enthusiasm, many are dubious about where the new primacy of special effects leaves narrative in cinema. The argument put forward by those such as Sean Cubitt and Scott Bukatman is that contemporary special effects tend to displace narrative insofar as they introduce a disjunctive temporality evocative of the sublime. Focusing on Doug Trumbull’s work, Bukatman emphasises the contemplative relationship established between spectator and screen in key effects scenes (a relationship frequently mirrored by on-screen characters displaying their awe at what they — and ‘we’ — are seeing).29 Cubitt suggests that similar ‘fetishistic’ moments occur in songs such as Diamonds are a Girl’s Best Friend, where narrative progress gives way to visual fascination. His example is drawn from a strikingly similar terrain to that which inspired Laura Mulvey’s well-known thesis on the tension between voyeurism and scopophilia in classical narrative cinema: Mainstream film neatly combined spectacle and narrative.
(Note, however, that in the musical, song-and-dance numbers break the flow of the diegesis.) The presence of woman is an indispensable element of spectacle in normal narrative film, yet her visual presence tends to work against the development of a story line, to freeze the flow of action in moments of erotic contemplation.30 This connection was also made by Tom Gunning in his work on the early ‘cinema of attraction’: ‘As Laura Mulvey has shown in a very different context, the dialectic between spectacle and narrative has fueled much of the classical cinema’.31 In this respect, a key point to draw from both Mulvey and Gunning is to recognise that they don’t conceive the relationship between spectacle and narrative in terms of opposition but of dialectical tension.32 This is something that other writers have sometimes forgotten. Presenting the issue in terms of an opposition (spectacle versus narrative) in fact recycles positions which have been consistently articulated (and regularly reversed) throughout the century. In the 1920s, avant-garde film makers railed against ‘narrative’ because it was associated primarily with literary and theatrical scenarios at the expense of cinematic qualities (Gunning begins his ‘Cinema of Attraction’ essay with just such a quote from Fernand Leger). Similar concerns emerged with debates in France over auteur theory in the 1950s, where the literary qualities of the script were opposed to the ‘properly cinematic’ qualities of mise-en-scene. In the 1970s, the ‘refusal of narrative’ which characterised much Screen theory of the period took on radical political connotations. Perhaps as a reaction to the extremity of pronouncements by those such as Peter Gidal, there has been a widespread restoration of narrative qualities as a filmic ‘good object’ in the present.
However, rather than attempting to resolve this split in favour of one side or the other, the more salient need is to examine their irreducible intertwining: what sort of stories are being told, and what sort of spectacles are being deployed in their telling? While it is easy to lament the quality of story-telling in contemporary blockbusters, few critics seriously maintain that such films are without narrative. A more productive framework is to analyse why explicitly ‘mythological’ films such as the Star Wars cycle have been able to grip the popular imagination at this particular historical conjuncture, marrying the bare bones of fairy-tale narrative structures to the inculcation of a specific type of special effects driven viewing experience. (To some extent, this is Bukatman’s approach in his analysis of special effects.) In this context, it is also worth remembering that, despite the quite profound transformations set in train by the use of digital technology in film making, there has thus far been little discernible effect on narrative in terms of structure or genre. The flirtation with ‘non-linear’ and ‘interactive’ films was a shooting star which came and went with the CD-ROM, while most contemporary blockbusters conform smoothly to established cine-genres (sci-fi, horror, disaster and action-adventure predominating), with a significant number being direct re-makes of older films done ‘better’ in the digital domain. One of the more interesting observations about possible trends in the industry is put forward by James Cameron, who has argued that digital technology has the potential to free film makers from the constraints of the ‘A’ and ‘B’ picture hierarchy: [I]n the ’40s you either had a movie star or you had a B-movie. Now you can create an A-level movie with some kind of visual spectacle, where you cast good actors, but you don’t need an Arnold or a Sly or a Bruce or a Kevin to make it a viable film.
33 However, Cameron himself throws doubt on the extent of this ‘liberation’ by underlining the industrial nature of digital film production.34 In practice, any film with the budget to produce a large number of cutting edge special effects shots is inevitably sold around star participation, as well as spectacle (as were films such as The Robe, 1953, and Ben Hur, 1926). This point about the intertwining of narrative and spectacle is reinforced if we look at developments in large-format film, an area frequently singled out for its over-dependence on screen spectacle to compensate for notoriously boring ‘educational’ narrative formats. Large-format (LF) cinema is currently in the throes of a significant transformation. The number of screens worldwide has exploded in the last four years (between 1995 and January 1999, the global LF circuit grew from 165 to 263 theatres. By January 2001, another 101 theatres are due to open, taking the total to 364, an increase of 120% in 6 years). More significantly, the majority of new screens are being run by commercial operators rather than institutions such as science museums. These new exhibition opportunities, coupled to the box-office returns generated by films such as Everest (the 15th highest grossing film in the USA in 1998, despite appearing on only 32 screens), have created significant momentum in the sector for the production of LF films capable of attracting broader audiences. For some producers, this means attempting to transfer the narrative devices of dramatic feature films onto the giant screen, while others argue that the peculiarities of the medium mean that LF needs to stick with its proven documentary subjects. However, most significantly in this context, none dispute the need for the sector to develop better narrative techniques if it is to grow and prosper, particularly by attracting ‘repeat’ audiences.
In many respects, the LF sector is currently in a similar position to cinema in the 1900s, with people going to see the apparatus rather than a specific film, and the ‘experience’ being advertised largely on this basis. While it would be simplistic to see current attempts to improve the narrative credentials of LF films as a faithful repetition of the path that 35mm cinema took earlier this century, since most production is likely to remain documentary-oriented, it would be equally foolish to ignore the cultural and commercial imperatives which still converge around telling a ‘good story’.35

Distraction and the politics of spectacle

Despite the current rash of digitally-inspired predictions, narrative in film is unlikely to succumb to technological obsolescence. But nor will spectacle be vanquished by a miraculous resurgence of ‘quality’ stories. A corollary of a dialectical conception of the interrelationship between narrative and spectacle is that neither should be seen simply as a ‘good’ or ‘bad’ object in itself. For Mulvey, spectacle (exemplified by close-ups which turn woman’s face and body into a fetish), as well as the more voyeuristic strategy of narrative, were both attuned to the anxious imagination of patriarchal culture in classical cinema. Both were techniques for negotiating the threat of castration raised by the image of woman, an image classical cinema simultaneously desired and sought to circumscribe or punish. Nevertheless, even within this heavily constrained context, ‘spectacle’ could also assume a radical function by ‘interrupting’ the smooth functioning of narrative, disturbing the rules of identification and the systematic organisation of the look within the text. (This is the gist of her comparison between the films of von Sternberg, which privilege a fetish image of Dietrich over narrative progress, and those of Hitchcock, which more closely align the viewer with the male protagonist.)
Can spectacle still exert a ‘progressive’ function in contemporary cinema? While most critics answer this question negatively without even posing it, Paul Young is unusual in granting a measure of radical effect to the renewed primacy of spectacle. Young draws on Miriam Hansen’s account of the ‘productive ambiguity’ of early cinema, in which the lack of standardised modes of exhibition, coupled to the reliance on individual attractions, gave audiences a relative freedom to interpret what they saw, and established cinema as (potentially) an alternative public sphere. He takes this as support for his argument that contemporary ‘spectacle’ cinema constitutes an emergent challenge to ‘Hollywood’s institutional identity’.36 Young’s analysis contrasts markedly with Gunning’s earlier description of the ‘cinema of effects’ as ‘tamed attractions’.37 Nevertheless, both share some common ground: Young’s reference to the ‘productive ambiguity’ of early cinema, like Gunning’s rather oblique and undeveloped reference to the ‘primal power’ of attraction, draws nourishment from Siegfried Kracauer’s early writings on the concept of distraction. In the 1920s, Kracauer set up ‘distraction’ as a counterpoint to contemplation as a privileged mode of audience reception, seeing it as embodying a challenge to bourgeois taste for literary-theatrical narrative forms, and also as the most compelling mode of presentation to the cinema audience of their own disjointed and fragmented conditions of existence.38 While distraction persisted as a category used by Walter Benjamin in his ‘Artwork’ essay of the mid-1930s, by the 1940s Kracauer seemed to have revised his position. As Elsaesser has pointed out, this re-appraisal was at least partly a re-assessment of the ‘productive ambiguity’ which had characterised social spaces such as cinema; by the 1940s distraction and spectacle had been consolidated into socially dominant forms epitomised by Hollywood on the one hand and fascism on the other.
39 If Kracauer’s faith that the 1920s audience could symptomatically encounter ‘its own reality’ via the superficial glamour of movie stars rather than the putative substance of the era’s ‘high culture’ was already shaken by the 1940s, what would he make of the post-pop art, postmodern 1990s? The extent to which surface elements of popular culture have been aesthetically ‘legitimated’ without any significant transformation of corresponding political and economic values suggests the enormous difficulties facing those trying to utilise spectacle as a ‘progressive’ element in contemporary culture. However, it is equally important to acknowledge that this problem cannot be resolved simply by appealing to ‘narrative’ as an antidote. While the terms remain so monolithic, the debate will not progress beyond generalities. In this respect, Kracauer’s work still offers some important lessons to consider in the present. Here, by way of conclusion, I want to sketch out a few possible lines of inquiry. On the one hand, his concept of the ‘mass ornament’ indicates that any turn, or return, to spectacle in cinema needs to be situated in a wider social context.40 Spectacle is not simply a matter of screen image, but constitutes a social relation indexed by the screen (something Guy Debord underlined in the 1960s). Developments in contemporary cinema need to be related to a number of other trajectories, including cinema’s on-going endeavours to distinguish its ‘experience’ from that of home entertainment, as well as the proliferation of spectacle in social arenas as diverse as sport (the Olympic games), politics (the dominance of the cult of personality in all political systems) and war (the proto-typical ‘media-event’). On the other hand, the specific forms of spectacle mobilised in contemporary cinema need to be examined for the extent to which they might reveal (in Kracauer’s terms) the ‘underlying meaning of existing conditions’.
Kracauer’s analysis of cinema in the 1920s situated the popularity of a certain structure of viewing experience in relation to the rise of a new class (the white collar worker). In contemporary terms, I would argue that the relevant transformation is the process of ‘globalisation’. While this is a complex, heterogeneous and uneven phenomenon, a relevant aspect to consider here is Hollywood’s increasing reliance on overseas markets, both for revenue and, more importantly, for growth.41 In this context, the growing imperative for films to ‘translate’ easily to all corners and cultures of the world is answered by building films around spectacular action set-pieces. Equally significantly, the predominant themes of recent special effects cinema — the destruction of the city and the mutation or dismemberment of the human body — are symptomatic of the underlying tensions of globalisation, tensions exemplified by widespread ambivalence towards the socio-political effects of speed and the new spatio-temporal matrices such as cyberspace.42 The most important cinematic manifestations of these anxious fascinations are not realised at the level of narrative ‘content’ (although they occasionally make themselves felt there), but appear symptomatically in the structure of contemporary viewing experience. The awe and astonishment repeatedly evoked by ‘impossible’ images as the currency of today’s ‘cutting edge’ cinema undoubtedly function to prepare us for the uncertain pleasures of living in a world we suspect we will soon no longer recognise: it is not simply ‘realism’ but ‘reality’ which is mutating in the era of the digital economy and the Human Genome Project.
If this turn to spectacle is, in some respects, comparable to the role played by early cinema in negotiating the new social spaces which emerged in the industrial city remade by factories and department stores, electrification and dynamic vehicles, it also underscores the fact that the ‘death’ of camera realism in the late twentieth century is a complex psycho-social process, not least because photo-realism was always less an aesthetic function than a deeply embedded social and political relation.43 Finally, I would argue that it is important not to subsume all these filmic developments under the single rubric of the ‘digital’. There is a need to acknowledge, firstly, that digital technology is used far more widely in the film industry than for the production of blockbusters and special effects (for example, it is the new industry standard in areas such as sound production and picture editing). Moreover, as Elsaesser has argued recently, technology is not the driving force: ‘In each case, digitisation is “somewhere”, but it is not what regulates the system, whose logic is commercial, entrepreneurial and capitalist-industrialist’.44 What the digital threshold has enabled is the realignment of cinema in conformity with new demands, such as ‘blockbuster’ marketing blitzes constructed around a few spectacular image sequences of the kind that propelled Independence Day to a US$800m gross. It has rejuvenated cinema’s capacity to set aesthetic agendas and, at the same time, restored its status as a key player in the contemporary political economy. In this context, one aspect of the digital threshold deserves further attention. In the 1990s, product merchandising has become an increasingly important part of financing the globalised film industry.
While some would date this from Star Wars, Jurassic Park offers a more relevant point of reference: for the first time, audiences could see on screen, as an integral part of the filmic diegesis, the same commodities they could purchase in the cinema foyer. As Lucie Fjeldstad (then head of IBM’s multimedia division) remarked at the time (1993): ‘Digital content is a return-on-assets goldmine, because once you create Terminator 3, the character, it can be used in movies, in theme-park rides, videogames, books, educational products’.45 Digital convergence is enacted not simply in the journey from large screen to small screen: the same parameters used in designing CG characters for a film can easily be transmitted to off-shore factories manufacturing plastic toys.

Friday, December 6, 2019

Rheumatoid Arthritis Essay Paper Example For Students

Arthritis is a general term for approximately 100 diseases that produce either INFLAMMATION of connective tissues, particularly in joints, or noninflammatory degeneration of these tissues. The word means joint inflammation, but because other structures are also affected, the diseases are often called connective tissue diseases. The terms rheumatism and rheumatic diseases are also used. Besides conditions so named, the diseases include gout, lupus erythematosus, ankylosing spondylitis, degenerative joint disease, and many others, among them the more recently identified LYME DISEASE. Causes of these disorders include immune-system reactions and the wear and tear of aging, while research indicates that the nervous system may often be equally involved. About one out of seven Americans exhibits some form of arthritis. INFLAMMATORY CONNECTIVE TISSUE DISEASES This varied group of diseases produces inflammation in the connective tissues, particularly in the joints. The signs of inflammation (warmth, redness, swelling, and pain) may be apparent. Microscopic examination of the lesions reveals prominent blood vessels, abnormal accumulations of white blood cells, and varying degrees of wound healing with scarring. In some diseases, the inflammation is clearly an immune reaction, the body's defense against invading microorganisms. In others, the cause is different or unknown. Infectious Arthritis This disease is most common in young adults. Infection in a joint is usually caused by bacteria or other microorganisms that invade the joint from its blood vessels. Within hours or a few days the joint, usually the knee or elbow, becomes inflamed. There is an abnormal accumulation of synovial, or joint, fluid, which may be cloudy and contain large numbers of white blood cells. Gonococcal arthritis, a complication of gonorrhea, is the most common form of infectious arthritis.
Treatment with antibiotics and aspiration of synovial fluid is usually promptly effective, and only minor residual damage is done to the joint. Occasionally the infection is prolonged, produces joint destruction, and requires surgery. Rheumatic Fever This is a form of infectious arthritis caused by hemolytic streptococcus, a bacterium. Unlike typical infectious arthritis, however, the disease is most common in children aged 5 to 15 years, begins weeks after the onset of the streptococcal infection, and streptococci cannot be isolated from the joint fluid. The inflammatory process may involve the heart and produce rheumatic heart disease. The symptoms of RHEUMATIC FEVER usually occur 2 to 3 weeks after the onset of a severe streptococcal sore throat. Acute pain and swelling migrate from joint to joint over a period of several days. The inflammation, which persists for less than three months, can usually be controlled by aspirin and rest, and it produces no residual deformity. Less than 1 percent of children with streptococcal sore throats develop rheumatic fever, and a small number of these will develop rheumatic heart disease. Rheumatic fever only rarely occurs if the streptococcal sore throat is treated early with an antibiotic such as penicillin. The inflammation of the joints and the heart in rheumatic fever apparently occurs because the body's immune response to the streptococcus damages tissues. For this reason, rheumatic fever has been termed an autoimmune disease. Gout and Pseudogout The inflammatory process in these diseases is unrelated to infection. Rather, inflammation is incited by the deposition in the joint of uric acid present in the bloodstream. An attack of acute gouty arthritis is caused by the formation of needlelike crystals of the deposited uric acid. When these crystals are ingested by white blood cells, the cells release enzymes that evoke inflammation. Uric acid is a normal breakdown product of purine metabolism.
Abnormally elevated blood levels of uric acid, which are associated with gouty arthritis, arise through either excessive production of uric acid or decreased excretion of uric acid by the kidneys. Some cases of hyperuricemia and gout are caused by known specific enzymatic defects. Many are associated with metabolic alterations that occur in obesity. When extreme, the gouty process results in large deposits of uric acid, or tophi, around joints. NONINFLAMMATORY CONNECTIVE TISSUE DISEASES The joints and other connective tissues can be involved by trauma, endocrine disorders, metabolic abnormalities, congenital deformities, and other disease processes. The most important one is degenerative joint disease (OSTEOARTHRITIS). Degenerative Joint Disease This is the most common form of arthritis and affects virtually all older adults to one degree or another. Most have few, if any, associated symptoms, and the disease is diagnosed only because X rays of the vertebrae show characteristic spurs or because the fingers are knobbed by bony proliferations (Heberden's nodes) at the distal interphalangeal joints. In some, the spurs encroach on nerves as they emerge from the spinal canal and produce nerve-root syndromes. In others, the malpositioned joints are a source of ligamentous strain and abnormal muscular tension. The result is pain that becomes worse as the day goes on. Occasionally a severe form of the disease affects the hips. The destructive process results in restricted mobility of the hip joints and disabling pain, and major surgery may be required. The destroyed tissue is removed and replaced by a new joint made of plastic, an operation that is usually dramatically effective. Degenerative processes affect the ligaments and intervertebral disks of the spine. If a disk slips out, the syndrome of herniated disk may ensue.
This is common in middle-aged men and usually affects the lumbar vertebrae, producing nerve-root irritation and ligamentous strain with resultant low-back pain and neurological deficits. Unless the symptoms remit with rest and analgesics, the disk may need to be surgically removed. These degenerative processes are in part caused by wear and tear. They affect primarily weight-bearing joints and joints subject to trauma or to malpositioned anatomy. Joints damaged by other forms of arthritis are prone to later degenerative joint disease. Heberden's nodes are more prominent in the right hand of right-handed individuals and in the fingers of typists. Traumas produce microfractures in the cartilage that lines the articulating surfaces, exposing raw underlying bone. The bone cells then release enzymes that destroy the protein and polysaccharide components of bone. Frayed pieces of cartilage may be taken up by white blood cells and thus add an element of inflammation. TREATMENT OF ARTHRITIS Accurate diagnosis and proper treatment usually follow naturally from the history, physical exam, and laboratory tests, and from consideration of the pathophysiologic mechanisms. Infectious arthritis usually responds dramatically to appropriate antibiotics. The noninfectious inflammatory diseases are treated with drugs that suppress inflammation. Many of these drugs, for example, aspirin, indomethacin, and ibuprofen, appear to work by inhibiting synthesis of the prostaglandins that mediate inflammation. Although certain adrenal cortical steroids are powerful inhibitors of inflammation, toxic side effects limit their usefulness. Similarly, drugs that inhibit proliferation of cells in the inflammatory masses have potentially severe side effects. Drugs that inhibit undesirable inflammation may also inhibit desired inflammatory responses. A result is a high frequency of secondary infections.
More specific therapy, for example, allopurinol and colchicine in gout, depends on knowledge of the precise biochemical mechanisms of disease pathogenesis. Researchers are also studying the use of drugs that act on the nervous system. Despite the wear-and-tear origin of degenerative joint disease, it, too, may respond well to so-called anti-inflammatory drugs. Perhaps they are primarily acting as analgesics (pain-killers), or they may act by decreasing the secondary inflammation that follows joint trauma. Franklin Mullinax

Thursday, November 28, 2019

The Top Athletes Looking for an Edge and the Scien Essays - Sports

The Top Athletes Looking for an Edge and the Scientists Trying to Catch Them. Behind the scenes there will be a high-tech, high-stakes competition between Olympic athletes who use banned substances and drug testers out to catch them. By Christie Aschwanden, Smithsonian Magazine, July 2012. DeeDee Trotter was on an airplane in 2006 when she overheard a passenger seated behind her discussing the steroids scandal. Federal investigators in the Balco case, named for a lab that produced supplements, would eventually implicate more than two dozen athletes for the use of performance-enhancing drugs, including Barry Bonds, baseball's home run king, and Marion Jones, the track-and-field star, who would end up in jail, stripped of five Olympic medals. "This guy was reading the newspaper and he said, 'Oh, they're all on drugs,'" recalls Trotter, a runner who won a gold medal in the 4 x 400 meter relay at the 2004 Olympics. She was furious. "I turned around and said, 'Hey, excuse me, I'm sorry, but that's not true. I'm a professional athlete and Olympic gold medalist, and I'm not on drugs. I've never even considered it.'" Currently vying to join the U.S. team and appear in her third Olympics, Trotter projects a sassy confidence. "It really upset me that it's perceived that way, that if she runs fast, then she's on drugs. I hated that and I gave him a little attitude." That airplane conversation prompted Trotter to create a foundation called Test Me, I'm Clean! "It gave us clean athletes a chance to defend ourselves," says Trotter. "If you see someone wearing this wristband," she holds up a rubbery white bracelet emblazoned with the group's name, "it means that I am a clean athlete. I do this with hard work, honesty and honor. I don't take any outside substances." As Trotter tells me this story, I catch myself wondering if it's all just a bunch of pre-emptive PR. 
It pains me to react this way, but with doping scandals plaguing the past three Summer Olympics and nearly every disgraced athlete insisting, at least initially, that he or she is innocent, it's hard to take such protestations at face value. My most profound disillusionment came from a one-time friend, Tyler Hamilton, my teammate on the University of Colorado cycling team. When he won a gold medal in the time trial at the 2004 Olympics, I was thrilled to see someone I'd admired as honest and hardworking reach the top of a sport that had been plagued by doping scandals. But in the days that followed, a new test implicated Hamilton for blood doping. His supporters began hawking "I Believe Tyler" T-shirts, and he took donations from fans to fund his defense. The evidence against him seemed indisputable, but the Tyler I knew in college was not a cheat or liar. So I asked him straight-out if he was guilty. He looked me in the eye and told me he didn't do it. Last year, after being subpoenaed by federal investigators, Hamilton finally confessed and returned his medal. The downfall of Olympic heroes has cast a cloud of suspicion over sports. And the dopers' victims aren't just the rivals from whom they stole their golden podium moments but every clean athlete whose performance is greeted with skepticism. Doping, or using a substance to enhance performance, is nothing new. Contrary to romantic notions about the purity of Olympic sports, ancient Greeks ingested special drinks and potions to give them an edge, and at the 1904 Games, athletes downed potent mixtures of cocaine, heroin and strychnine. For most of Olympic history, using drugs wasn't considered cheating. Then, in the 1960 Olympics, Danish cyclist Knut Jensen passed out during a race, cracked his skull and later died. The coroner blamed the death on amphetamines, and the case led to anti-doping rules. Drug testing began with the 1968 Games, with a goal to protect athlete health. 
In addition to short-term damage, certain drugs also appear to increase the risk of heart disease and possibly cancer. The original intent of anti-doping rules was to prevent athletes from dropping dead of overdoses, but over the years the rules have come to focus just as intently on

Monday, November 25, 2019

Mid-Autumn Festival Essays - Autumn, Public Holidays In China

Mid-Autumn Festival Essays - Autumn, Public Holidays In China Mid-Autumn Festival The Mid-Autumn Festival occurs every year on the fifteenth day of the eighth month. This date is given in the lunar calendar, which is used by the Chinese. In the Gregorian calendar, used in America, this day falls around the fifteenth of September. On this day, the moon is supposed to be at its fullest and brightest of the year. The whole family eats out or in their yards to celebrate and watch the full moon. Children play with paper lanterns, and the same lanterns are hung outside the front doors of buildings, such as houses and restaurants. Mooncakes are eaten and Chinese tea is usually used to wash them down. The name, mooncake, is self-explanatory. It is a round cake, in the shape of a moon. The ingredients of the cake consist of lotus seeds, made into a sort of paste. The paste is surrounded by a crust, which usually has four Chinese characters imprinted on the top. These characters either tell the type of mooncake it is (i.e. regular, lotus with egg yolk), the name of the store it was bought from, or simply say 'mooncake'. The origin of the mooncake is in China, during the Yuan Dynasty. The Han people had been conquered by the Mongolians, who named the new dynasty Yuan. The Han people did not like living under Mongolian rule. Therefore, they wanted to rebel and retake China. However, the Mongolians had taken this into consideration and did not allow the people to communicate (especially in public gatherings) or to possess sharp, pointed weaponry. Thus, the people had to find a way of communicating secretly. One group of men thought up the idea of placing a piece of paper with the date of the rebellion inside little cakes, which they would sell to the people, who would read the paper and find out the date. To gain permission from the Mongolian soldiers to sell the cakes, they told them that the cakes were a sort of offering to the gods. 
They said that they would pray that the Mongolian emperor could have eternal life. The gullible soldiers quickly agreed. Everyone received the cakes and the rebellion date was set for the fifteenth day of the eighth month. Since the Mongolians could not read Chinese, they did not know of the rebellion, were caught by surprise, and defeated. From then on, the fifteenth day of the eighth month was known as the day of the Mid-Autumn Festival to celebrate the day of the rebellion. Many myths are formed about holidays. One which goes with this holiday is about a time when the world had ten suns and the earth was hot and dry. Nothing could survive. A hero stepped forward and used nine arrows to shoot down nine of the suns. He was crowned king and married a beautiful wife. Within years of his reign, he became selfish and greedy, a dictator. He wanted to live forever and make the people suffer. Therefore, he mixed a powerful potion and made a pill which, when eaten, would give the person eternal life. His wife found him out and stole the pill. To keep her husband from eating it, she ate it herself. However, after she ate it, she felt her body get lighter and lighter until she was floating. She kept rising higher and higher until she reached the moon, where she lives until this day. There are many variations of this story, such as the bringing of a rabbit with her because the gods wanted to reward her bravery by giving her company for her loneliness. Some people say that they can sometimes see a woman in the moon with a rabbit and a tree (another variation).

Thursday, November 21, 2019

Assignment B wk3 Essay Example | Topics and Well Written Essays - 750 words

Assignment B wk3 - Essay Example It is biblically established that God is a God of truth, and reverence for Him must be in keeping with that truth. To worship is to demonstrate honor and respect to God and, when in His physical presence, to prostrate oneself in a way that acknowledges His supremacy (MacArthur, 1983). Worshipping in truth is to show adoration to Him in human nature through one's actions. The concept of praising God in spirit and truth originates from Jesus' discussion with the woman at the well in John 4. In the discussion, the woman was talking about places of worship with Jesus, claiming that the Samaritans worshipped at Mount Gerizim while the Jews prayed at Jerusalem. Jesus had just demonstrated that He knew about her numerous spouses, as well as the fact that the man she currently stayed with was not her spouse (MacArthur, 1983). This made her uneasy, so she tried to divert His attention from her private life to issues of religion. Jesus did not get distracted from His lesson on right worship and got to the core of the issue when He said that the hour was coming, and had already come, when the true worshipers would adore God in truth and spirit, for God desires such worshipers (John 4). The overall message concerning worshipping God in spirit and truth is that adoration of the Father is not to be restricted to a solitary geographical site or controlled by the temporary requirements of biblical law (MacArthur, 1983). With the presence of Christ, the separation between Gentile and Jew was no longer pertinent, nor was the site of worship, such as the temple. With Christ, all of God's believers gained equal access to God through Jesus. Worship became a matter of the spirit, not outward actions, bound by truth rather than ceremony. True worship ought to be in spirit that

Wednesday, November 20, 2019

Psychology of religion Essay Example | Topics and Well Written Essays - 1250 words

Psychology of religion - Essay Example needs to act in his own interests and to ward off things that will harm him." (Retrieved from www.englishforums.com) Different communities maintain different religious beliefs, though the attributes affiliated with the Supreme Being are similar to some extent. The same is the case with Christianity. Christianity is the most popular religion of the globe, as its followers outnumber those of all other faiths existing in the contemporary world. Christians have extracted the attributes of God from biblical themes and stories. But philosophers hold diverse opinions regarding the background, description, existence and execution of these characteristics. Hodgson & King (1985) have compared the philosophical views of the eminent 13th-century theologian and philosopher Thomas Aquinas with the work of the contemporary thinker Gordon Kaufman in their famous work "Readings in Christian Mythology." The work concentrates on the religious aspects of Christianity with reference to religious themes and beliefs in order to show the relation of human characteristics to those attributed to Almighty God. St. Aquinas is of the view that the merit and demerit of all the qualities obtained and possessed by humans have been determined as good or bad by Almighty God. In other words, it is not man that decides whether an action, an idea, a notion or a concept is good or bad; rather, these qualities have already been decided by the Lord, on the basis of which all the actions, activities and attitudes of human beings are regulated and maintained. "All that man is, and can, and has," Aquinas suggests, "must be referred to God; and therefore every action of man, whether good or bad, acquires merit or demerit in the sight of God, as far as the action itself is concerned." (Quoted in Porter, 1997: 212) In the same way, Aquinas submits that no words in any language can portray the attributes of God. 
On the other hand, man has learnt and

Monday, November 18, 2019

The Relationship between Employee Commitment and Employee Engagement, Assignment

The Relationship between Employee Commitment and Employee Engagement, Employee Satisfaction - Assignment Example It can also be described as creating a healthy work environment for employees in order to motivate them. It will help the employees connect with their work and job responsibilities (Storey, Wright and Ulrich, 2009, p.300). On the other hand, commitment can be defined as a willingness to persevere in a course of action and a reluctance to change plans. Employees devote their energy and time to fulfil their job responsibilities as well as their personal, community, family and spiritual obligations. Employees who are committed to their organizations and highly engaged in their jobs provide effective competitive advantages to the organizations in terms of higher output. Uncommitted employees do not bother about workplace performance and outputs. On the other hand, committed employees tend to put their full effort into fulfilling their personal career goals and job responsibilities. Engagement of an employee is not possible without effective commitment towards the organization and sheer hard work. Leaders or managers of an organization play a vital role in employee engagement. It is important for a manager to value the needs and satisfaction level of an employee in order to retain employee commitment and employee engagement. Only a motivated employee can perform effectively in an organization. ... It will help an organization to achieve success (Mannelly, 2009, p.161). Committed employees are more engaged with their job and organization compared to uncommitted employees. Employee engagement, employer practices, work performance and business results are highly related to each other. It is the responsibility of the employers to motivate their employees to perform efficiently. Effective performance appraisal, incentive systems and career growth opportunities are the motivation and performance drivers for an employee in an organization. 
These aspects make an employee committed to their job. Committed employees provide their best performance in order to capitalize on potential career opportunities. Therefore, it can be stated that effective employee engagement can help an organization increase its business productivity. An effective performance appraisal system increases the commitment level of an employee. It is evident that global workplace behaviour is changing dramatically (Albrecht, 2010, p.67). Nowadays, customers seek value-added, high-quality products and services. Therefore, global organizations are trying to motivate their workforces in order to meet the demands of their customers. Uncommitted employees cannot perform effectively due to lack of workplace motivation. As skilled and motivated employees are the biggest assets of an organization, it is the responsibility of the organization to take care of their needs. Therefore, it can be concluded that committed employees are more engaged with their work and responsibilities than uncommitted employees. Is it correct to say that Committed Employees are more satisfied than

Friday, November 15, 2019

Single Stage to Orbit (SSTO) Propulsion System

Single Stage to Orbit (SSTO) Propulsion System This paper discusses the relevant selection criteria for a single stage to orbit (SSTO) propulsion system and then reviews the characteristics of the typical engine types proposed for this role against these criteria. The engine types considered include Hydrogen/Oxygen (H2/O2) rockets, Scramjets, Turbojets, Turborockets and Liquid Air Cycle Engines. In the author's opinion none of the above engines is able to meet all the necessary criteria for an SSTO propulsion system simultaneously. However, by selecting appropriate features from each it is possible to synthesise a new class of engines which are specifically optimised for the SSTO role. The resulting engines employ precooling of the airstream and a high internal pressure ratio to enable a relatively conventional high pressure rocket combustion chamber to be utilised in both airbreathing and rocket modes. This results in a significant mass saving with installation advantages which, by careful design of the cycle thermodynamics, enables the full potential of airbreathing to be realised. The SABRE engine which powers the SKYLON launch vehicle is an example of one of these so-called precooled hybrid airbreathing rocket engines, and the conceptual reasoning which leads to its main design parameters is described in the paper. Keywords: Reusable launchers, SABRE, SKYLON, SSTO

1. Introduction

Several organisations world-wide are studying the technical and commercial feasibility of reusable SSTO launchers. This new class of vehicles appears to offer the tantalising prospect of greatly reduced recurring costs and increased reliability compared to existing expendable vehicles. However, achieving this breakthrough is a difficult task since the attainment of orbital velocity in a re-entry capable single stage demands extraordinary propulsive performance. 
Most studies to date have focused on high pressure hydrogen/oxygen (H2/O2) rocket engines for the primary propulsion of such vehicles. However, it is the author's opinion that despite recent advances in materials technology such an approach is not destined to succeed, due to the relatively low specific impulse of this type of propulsion. Airbreathing engines offer a possible route forward with their intrinsically higher specific impulse. However, their low thrust/weight ratio, limited Mach number range and high dynamic pressure trajectory have in the past cancelled any theoretical advantage. By design review of the relevant characteristics of both rockets and airbreathing engines this paper sets out the rationale for the selection of deeply precooled hybrid airbreathing rocket engines for the main propulsion system of SSTO launchers, as exemplified by the SKYLON vehicle [1].

2. Propulsion Candidates

This paper will only consider those engine types which would result in politically and environmentally acceptable vehicles. Therefore engines employing nuclear reactions (eg: onboard fission reactors or external nuclear pulse) and chemical engines with toxic exhausts (eg: fluorine/oxygen) will be excluded. The candidate engines can be split into two broad groups, namely pure rockets and engines with an airbreathing component. Since none of the airbreathers are capable of accelerating an SSTO vehicle all the way to orbital velocity, a practical vehicle will always have an onboard rocket engine to complete the ascent. Therefore the use of airbreathing has always been proposed within the context of improving the specific impulse of pure rocket propulsion during the initial lower Mach portion of the trajectory. Airbreathing engines have a much lower thrust/weight ratio than rocket engines (≈10%) which tends to offset the advantage of reduced fuel consumption. 
Therefore vehicles with airbreathing engines invariably have wings and employ a lifting trajectory in order to reduce the installed thrust requirement and hence the airbreathing engine mass penalty. The combination of wings and airbreathing engines then demands a low flat trajectory (compared to a ballistic rocket trajectory) in order to maximise the installed performance (i.e. (thrust-drag)/fuel flow). This high dynamic pressure trajectory gives rise to one of the drawbacks of an airbreathing approach since the airframe heating and loading are increased during the ascent, which ultimately reflects in increased structure mass. However, the absolute level of mass growth depends on the relative severity of the ascent as compared with reentry, which in turn is mostly dependent on the type of airbreathing engine selected. An additional drawback to the low trajectory is increased drag losses, particularly since the vehicle loiters longer in the lower atmosphere due to the lower acceleration, offset to some extent by the much reduced gravity loss during the rocket powered ascent. Importantly however, the addition of a set of wings brings more than just performance advantages to airbreathing vehicles. They also give considerably increased abort capability since a properly configured vehicle can remain in stable flight with up to half of its propulsion systems shut down. Also during reentry the presence of wings reduces the ballistic coefficient, thereby reducing the heating and hence thermal protection system mass, whilst simultaneously improving the vehicle lift/drag ratio permitting greater crossrange. 
The suitability of the following engines to the SSTO launcher role will be discussed, since these are representative of the main types presently under study within various organisations world-wide:

Liquid Hydrogen/Oxygen rockets
Ramjets and Scramjets
Turbojets/Turborockets and variants
Liquid Air Cycle Engines (LACE) and Air Collection Engines (ACE)
Precooled hybrid airbreathing rocket engines (RB545/SABRE)

3. Selection Criteria

The selection of an optimum propulsion system involves an assessment of a number of interdependent factors which are listed below. The relative importance of these factors depends on the severity of the mission and the vehicle characteristics.

Engine performance: useable Mach number and altitude range; installed specific impulse; installed thrust/weight; performance sensitivity to component level efficiencies.
Engine/airframe integration: effect on airframe layout (Cg/Cp pitch trim, structural efficiency); effect of required engine trajectory (Q and heating) on airframe technology/materials.
Technology level: materials/structures/aerothermodynamic and manufacturing technology.
Development cost: engine scale and technology level; complexity and power demand of ground test facilities; necessity of an X plane research project to precede the main development program.

4. Hydrogen/Oxygen Rocket Engines

Hydrogen/oxygen rocket engines achieve a very high thrust/weight ratio (60-80) but relatively low specific impulse (450-475 secs in vacuum) compared with conventional airbreathing engines. Due to the relatively large ΔV needed to reach low earth orbit (approx 9 km/s including gravity and drag losses) in relation to the engine exhaust velocity, SSTO rocket vehicles are characterised by very high mass ratios and low payload fractions. The H2/O2 propellant combination is invariably chosen for SSTO rockets due to its higher performance than other alternatives, despite the structural penalties of employing a very low density cryogenic fuel. 
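The connection between ΔV, specific impulse and mass ratio can be made concrete with the Tsiolkovsky rocket equation. The sketch below uses illustrative round numbers drawn from the figures quoted above (9 km/s effective ΔV, 460 s vacuum Isp); it is not a vehicle design calculation:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Initial/final mass ratio from the Tsiolkovsky rocket equation:
    m0/mf = exp(delta_v / (isp * g0))."""
    return math.exp(delta_v / (isp * G0))

# ~9 km/s effective delta-V to LEO, ~460 s Isp for H2/O2 (values from the text)
r = mass_ratio(9000.0, 460.0)
print(round(r, 2))  # -> 7.35, i.e. roughly 86% of lift-off mass is propellant
```

With the remaining ~14% split between structure, engines and tankage, a payload fraction of only 1-2% follows directly, which is why small mass-estimate errors can erase the payload entirely.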
In order to maximise the specific impulse, high area ratio nozzles are required which inevitably leads to a high chamber pressure cycle in order to give a compact installation and reduce back pressure losses at low altitude. The need to minimise back pressure losses normally results in the selection of some form of altitude compensating nozzle since conventional bell nozzles have high divergence and overexpansion losses when running in a separated condition. The high thrust/weight and low specific impulse of H2/O2 rocket engines favours vertical takeoff wingless vehicles since the wing mass and drag penalty of a lifting trajectory results in a smaller payload than a steep ballistic climb out of the atmosphere. The ascent trajectory is therefore extremely benign (in terms of dynamic pressure and heating) with vehicle material selection determined by re-entry. Relative to airbreathing vehicles a pure rocket vehicle has a higher density (gross take off weight/volume) due to the reduced hydrogen consumption which has a favourable effect on the tankage and thermal protection system mass. In their favour rocket engines represent broadly known (current) technology, are ground testable in simple facilities, functional throughout the whole Mach number range and physically very compact resulting in good engine/airframe integration. Abort capability for an SSTO rocket vehicle would be achieved by arranging a high takeoff thrust/weight ratio (eg: 1.5) and a large number of engines (eg: 10) to permit shutdown of at least two whilst retaining overall vehicle control. From an operational standpoint SSTO rockets will be relatively noisy since the high takeoff mass and thrust/weight ratio results in an installed thrust level up to 10 times higher than a well designed airbreather. Reentry should be relatively straightforward providing the vehicle reenters base first with active cooling of the engine nozzles and the vehicle base. 
However the maximum lift/drag ratio in this attitude is relatively low (approx 0.25) limiting the maximum achievable crossrange to around 250 km. Having reached a low altitude some of the main engines would be restarted to control the subsonic descent before finally effecting a tailfirst landing on legs. Low crossrange is not a particular problem providing the vehicle operator has adequate time to wait for the orbital plane to cross the landing site. However in the case of a military or commercial operator this could pose a serious operational restriction and is consequently considered to be an undesirable characteristic for a new launch vehicle. In an attempt to increase the crossrange capability some designs attempt nosefirst re-entry of a blunt cone shaped vehicle or alternatively a blended wing/body configuration. This approach potentially increases the lift/drag ratio by reducing the fuselage wave drag and/or increasing the aerodynamic lift generation. However the drawback to this approach is that the nosefirst attitude is aerodynamically unstable since the aft mounted engine package pulls the empty center of gravity a considerable distance behind the hypersonic center of pressure. The resulting pitching moment is difficult to trim without adding nose ballast or large control surfaces projecting from the vehicle base. It is expected that the additional mass of these components is likely to erode the small payload capability of this engine/vehicle combination to the point where it is no longer feasible. Recent advances in materials technology (eg: fibre reinforced plastics and ceramics) have made a big impact on the feasibility of these vehicles. However the payload fraction is still very small at around 1-2% for an Equatorial low Earth orbit falling to as low as 0.25% for a Polar orbit. 
The low payload fraction is generally perceived to be the main disadvantage of this engine/vehicle combination and has historically prevented the development of such vehicles, since it is felt that a small degree of optimism in the preliminary mass estimates may be concealing the fact that the real payload fraction is negative. One possible route forward to increasing the average specific impulse of rocket vehicles is to employ the atmosphere for both oxidiser and reaction mass for part of the ascent. This is an old idea dating back to the 1950s, revitalised by the emergence of the BAe/Rolls Royce HOTOL project in the 1980s [2]. The following sections will review the main airbreathing engine candidates and trace the design background of precooled hybrid airbreathing rockets.

5. Ramjet and Scramjet Engines

A ramjet engine is from a thermodynamic viewpoint a very simple device consisting of an intake, combustion and nozzle system in which the cycle pressure rise is achieved purely by ram compression. Consequently a separate propulsion system is needed to accelerate the vehicle to speeds at which the ramjet can take over (Mach 1-2). A conventional hydrogen fuelled ramjet with a subsonic combustor is capable of operating up to around Mach 5-6, at which point the limiting effects of dissociation reduce the effective heat addition to the airflow, resulting in a rapid loss in nett thrust. The idea behind the scramjet engine is to avoid the dissociation limit by only partially slowing the airstream through the intake system (thereby reducing the static temperature rise) and hence permitting greater useful heat addition in the now supersonic combustor. By this means scramjet engines offer the tantalising prospect of achieving a high specific impulse up to very high Mach numbers. The consequent decrease in the rocket powered ΔV would translate into a large saving in the mass of liquid oxygen required and hence possibly a reduction in launch mass. 
Although the scramjet is theoretically capable of generating positive nett thrust to a significant fraction of orbital velocity it is unworkable at low supersonic speeds. Therefore it is generally proposed that the internal geometry be reconfigured to function as a conventional ramjet to Mach 5 followed by transition to scramjet mode. A further reduction of the useful speed range of the scramjet results from consideration of the nett vehicle specific impulse ((thrust-drag)/fuel flow) in scramjet mode as compared with rocket mode. This tradeoff shows that it is more effective to shut the scramjet down at Mach 12-15 and continue the remainder of the ascent on pure rocket power. Therefore a scramjet powered launcher would have four main propulsion modes: a low speed accelerator mode to ramjet followed by scramjet and finally rocket mode. The proposed low speed propulsor is often a ducted ejector rocket system employing the scramjet injector struts as both ejector nozzles to entrain air at low speeds and later as the rocket combustion chambers for the final ascent. Whilst the scramjet engine is thermodynamically simple in conception, in engineering practice it is the most complex and technically demanding of all the engine concepts discussed in this paper. To make matters worse many studies including the recent ESA Winged Launcher Concept study have failed to show a positive payload for a scramjet powered SSTO since the fundamental propulsive characteristics of scramjets are poorly suited to the launcher role. The low specific thrust and high specific impulse of scramjets tends to favour a cruise vehicle application flying at fixed Mach number over long distances, especially since this would enable the elimination of most of the variable geometry. 
Scramjet engines have a relatively low specific thrust (nett thrust/airflow) due to the moderate combustor temperature rise and pressure ratio, and therefore a very large air mass flow is required to give adequate vehicle thrust/weight ratio. However, at constant freestream dynamic head the captured air mass flow reduces for a given intake area as speed rises above Mach 1. Consequently the entire vehicle frontal area is needed to serve as an intake at scramjet speeds, and similarly the exhaust flow has to be re-expanded back into the original streamtube in order to achieve a reasonable exhaust velocity. However, employing the vehicle forebody and aftbody as part of the propulsion system has many disadvantages:

The forebody boundary layer (up to 40% of the intake flow) must be carried through the entire shock system, with consequent likelihood of upsetting the intake flow stability. The conventional solution of bleeding the boundary layer off would be unacceptable due to the prohibitive momentum drag penalty.

The vehicle undersurface must be flat in order to provide a reasonably uniform flowfield for the engine installation. The flattened vehicle cross section is poorly suited to pressurised tankage and has a higher surface area/volume than a circular cross section, with knock-on penalties in aeroshell, insulation and structure mass.

Since the engine and airframe are physically inseparable, little freedom is available to the designer to control the vehicle pitch balance. The single sided intake and nozzle systems positioned underneath the vehicle generate both lift and pitching moments. Since it is necessary to optimise the intake and nozzle system geometry to maximise the engine performance, it is extremely unlikely that the vehicle will be pitch balanced over the entire Mach number range. Further, it is not clear whether adequate CG movement to trim the vehicle could be achieved by active propellant transfer. 
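The capture-area argument above follows directly from the definition of dynamic pressure: since q = ½ρV², the mass flow through a fixed capture area A at constant q is ṁ = ρVA = 2qA/V, which falls inversely with flight speed. A minimal numeric check, assuming an illustrative 50 kPa trajectory (a value chosen here for the example, not one given in the paper):

```python
def capture_mass_flow(q, area, velocity):
    """Air mass flow (kg/s) through a fixed capture area at constant
    dynamic pressure q, from q = 0.5*rho*V^2 => mdot = 2*q*A/V."""
    return 2.0 * q * area / velocity

Q = 50e3  # assumed 50 kPa dynamic pressure trajectory
A = 1.0   # 1 m^2 capture area
print(capture_mass_flow(Q, A, 600.0))   # ~167 kg/s at low supersonic speed
print(capture_mass_flow(Q, A, 3000.0))  # ~33 kg/s at high Mach: a 5x reduction
```

A fivefold increase in speed cuts the captured flow fivefold, which is why the whole vehicle frontal area must be pressed into service as an intake at scramjet speeds.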
Clustering the engines into a compact package underneath the vehicle results in a highly interdependent flowfield. An unexpected failure in one engine, with a consequent loss of internal flow, is likely to unstart the entire engine installation, precipitating a violent change in vehicle pitching moment. In order to focus the intake shock system and generate the correct duct flow areas over the whole Mach range, variable geometry intake/combustor and nozzle surfaces are required. The large variation in flow passage shape forces the adoption of a rectangular engine cross section with flat moving ramps, thereby incurring a severe penalty in the pressure vessel mass. Also, maximising the installed engine performance requires a high dynamic pressure trajectory, which in combination with the high Mach number imposes severe heating rates on the airframe. Active cooling of significant portions of the airframe will be necessary, with further penalties in mass and complexity. Further drawbacks to the scramjet concept are evident in many areas. The nett thrust of a scramjet engine is very sensitive to the intake, combustion and nozzle efficiencies due to the exceptionally poor work ratio of the cycle. Since the exhaust velocity is only slightly greater than the incoming freestream velocity, a small reduction in pressure recovery or combustion efficiency is likely to convert a small nett thrust into a small nett drag. This situation might be tolerable if the theoretical methods (CFD codes) and engineering knowledge were on a very solid footing, with ample correlation of theory with experiment. However the reality is that the component efficiencies are dependent on the detailed physics of poorly understood areas like flow turbulence, shock wave/boundary layer interactions and boundary layer transition. 
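The thrust sensitivity argument can be made concrete with the stream-thrust approximation, nett thrust = mdot x (Ve - V0). The velocities and mass flow below are illustrative assumptions: because Ve barely exceeds V0 at high Mach number, a modest loss in exhaust velocity flips a small nett thrust into a nett drag.

```python
def net_thrust(mdot_kg_s, v_exhaust_ms, v_freestream_ms):
    """Stream-thrust approximation: nett thrust = mdot * (Ve - V0)."""
    return mdot_kg_s * (v_exhaust_ms - v_freestream_ms)

MDOT = 100.0   # captured air mass flow, kg/s (illustrative)
V0 = 4500.0    # freestream velocity, m/s (~Mach 15, assumed)
VE = 5000.0    # ideal exhaust velocity, m/s (assumed)

ideal = net_thrust(MDOT, VE, V0)           # +50 kN nett thrust
degraded = net_thrust(MDOT, 0.88 * VE, V0) # a 12% exhaust velocity loss -> nett drag
```

Here a 12% shortfall in achieved exhaust velocity (from intake pressure recovery or combustion efficiency losses) turns +50 kN of thrust into -10 kN of drag, which is why scramjet performance predictions are so sensitive to component efficiencies.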
To exacerbate this deficiency in the underlying physics, existing ground test facilities are unable to replicate the flowfield at physically representative sizes, forcing the adoption of expensive flight research vehicles to acquire the necessary data. Scramjet development could only proceed after a lengthy technology program and even then would probably be a risky and expensive project. In 1993 Reaction Engines estimated that a 130 tonne scramjet vehicle development program would cost $25B (at fixed prices), assuming that the program proceeded according to plan. This program would have included two X planes, one devoted to the subsonic handling and low supersonic regime and the other an air dropped scramjet research vehicle to explore the Mach 5-15 regime. 

6. Turbojets, Turborockets and Variants 

In this section are grouped those engines that employ turbocompressors to compress the airflow but without the aid of precoolers. The advantage of cycles that employ onboard work transfer to the airflow is that they are capable of operation from sea level static conditions. This has important performance advantages over engines employing solely ram compression, and additionally enables a cheaper development program since the mechanical reliability can be acquired in relatively inexpensive open air ground test facilities. 

6.1 Turbojets 

Turbojets (Fig. 1) exhibit a very rapid thrust decay above about Mach 3 due to the effects of the rising compressor inlet temperature forcing a reduction in both flow and pressure ratio. Compressors must be operated within a stable part of their characteristic bounded by the surge and choke limits. In addition, structural considerations impose an upper outlet temperature and spool speed limit. As inlet temperature rises (whilst operating at constant W√T/P and N/√T) the spool speed and/or outlet temperature limit is rapidly approached. 
Either way it is necessary to throttle the engine by moving down the running line, in the process reducing both flow and pressure ratio. The consequent reduction in nozzle pressure ratio and mass flow results in a rapid loss in nett thrust. However at Mach 3 the vehicle has received an insufficient boost to make up for the mass penalty of the airbreathing engine. Therefore all these cycles tend to be proposed in conjunction with a subsonic combustion ramjet mode to higher Mach numbers. The turbojet would be isolated from the hot airflow in ramjet mode by blocker doors which allow the airstream to flow around the core engine with small pressure loss. The ramjet mode provides reasonable specific thrust to around Mach 6-7, at which point transition to rocket propulsion is effected. Despite the ramjet extension to the Mach number range, the performance of these systems is poor due mainly to their low thrust/weight ratio. An uninstalled turbojet has a thrust/weight ratio of around 10. However this falls to 5 or less when the intake and nozzle systems are added, which compares badly with a H2/O2 rocket at 60+. 

6.2 Turborocket 

The turborocket (Fig. 2) cycles represent an attempt to improve on the low thrust/weight of the turbojet and to increase the useful Mach number range. The pure turborocket consists of a low pressure ratio fan driven by an entirely separate turbine employing H2/O2 combustion products. Due to the separate turbine working fluid the matching problems of the turbojet are eased, since the compressor can in principle be operated anywhere on its characteristic. By manufacturing the compressor components in a suitable high temperature material (such as reinforced ceramic) it is possible to eliminate the ramjet bypass duct and operate the engine to Mach 5-6 whilst staying within outlet temperature and spool speed limits. In practice this involves operating at reduced nondimensional speed N/√T and hence pressure ratio. 
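The corrected-speed squeeze behind both the turbojet's thrust decay and the turborocket's reduced N/√T operation can be sketched with the ideal stagnation temperature relation. The ambient temperature and reference values below are standard-atmosphere assumptions for illustration.

```python
def stagnation_temp(t_ambient_k, mach, gamma=1.4):
    """Ideal intake recovery temperature: T0 = T * (1 + (gamma-1)/2 * M^2)."""
    return t_ambient_k * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

T_REF = 288.15    # sea-level static reference temperature, K
T_STRAT = 216.65  # stratospheric ambient temperature, K (assumed)

# At fixed mechanical spool speed N, the nondimensional speed N/sqrt(T0)
# falls as the compressor inlet (stagnation) temperature rises with Mach:
for mach in (0, 1, 2, 3):
    t0 = stagnation_temp(T_STRAT, mach)
    rel = (T_REF / t0) ** 0.5
    print(f"M{mach}: T0 = {t0:5.0f} K, N/sqrt(T) = {rel:.2f} x reference")
```

By Mach 3 the inlet stagnation temperature is roughly 2.8 times the ambient value, so at constant mechanical speed the corrected speed has fallen to about 70% of its sea-level reference, pushing the compressor down its running line exactly as described above.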
Consequently, to avoid choking the compressor outlet guide vanes, a low pressure ratio compressor is selected (often only 2 stages) which permits operation over a wider flow range. The turborocket is considerably lighter than a turbojet. However the low cycle pressure ratio reduces the specific thrust at low Mach numbers and, in conjunction with the preburner liquid oxygen flow, results in a poor specific impulse compared to the turbojet. 

6.3 Expander Cycle Turborocket 

This cycle is a variant of the turborocket whereby the turbine working fluid is replaced by high pressure regeneratively heated hydrogen warmed in a heat exchanger located in the exhaust duct (Fig. 3). Due to heat exchanger metal temperature limitations the combustion process is normally split into two stages (upstream and downstream of the matrix) and the turbine entry temperature is quite low at around 950K. 

[Fig. 1 Turbo-ramjet Engine (with integrated rocket engine). Fig. 2 Turborocket. Fig. 3 Turbo-expander engine.] 

This variant exhibits a moderate improvement in specific impulse compared with the pure turborocket due to the elimination of the liquid oxygen flow. However this is achieved at the expense of additional pressure loss in the air ducting and the mass penalty of the heat exchanger. Unfortunately none of the above engines exhibit any performance improvement over a pure rocket approach to the SSTO launcher problem, despite the wide variations in core engine cycle and machinery. This is for the simple reason that the core engine masses are swamped by the much larger masses of the intake and nozzle systems, which tend to outweigh the advantage of increased specific impulse. Due to the relatively low pressure ratio ramjet modes of these engines, it is essential to provide an efficient high pressure recovery variable geometry intake and a variable geometry exhaust nozzle. 
The need for high pressure recovery forces the adoption of 2 dimensional geometry for the intake system due to the requirement to focus multiple oblique shockwaves over a wide Mach number range. This results in a very serious mass penalty due to the inefficient pressure vessel cross section and the physically large and complicated moving ramp assembly with its high actuation loads. Similarly the exhaust nozzle geometry must be capable of a wide area ratio variation in order to cope with the widely differing flow conditions (W√T/P and pressure ratio) between transonic and high Mach number flight. A further complication emerges due to the requirement to integrate the rocket engine needed for the later ascent into the airbreathing engine nozzle. This avoids the prohibitive base drag penalty that would result from a separate dead nozzle system as the vehicle attempted to accelerate through transonic. 

7. Liquid Air Cycle Engines (LACE) and Air Collection Engines (ACE) 

Liquid Air Cycle Engines were first proposed by Marquardt in the early 1960s. The simple LACE engine exploits the low temperature and high specific heat of liquid hydrogen in order to liquefy the captured airstream in a specially designed condenser (Fig. 4). Following liquefaction the air is relatively easily pumped up to such high pressures that it can be fed into a conventional rocket combustion chamber. The main advantage of this approach is that the airbreathing and rocket propulsion systems can be combined, with only a single nozzle required for both modes. This results in a mass saving and a compact installation with efficient base area utilisation. Also the engine is in principle capable of operation from sea level static conditions up to perhaps Mach 6-7. 

[Fig. 4 Liquid Air Cycle Engine (LACE).] 
The main disadvantage of the LACE engine however is that the fuel consumption is very high (compared to other airbreathing engines), with a specific impulse of only about 800 secs. Condensing the airflow necessitates the removal of the latent heat of vaporisation under isothermal conditions. However the hydrogen coolant is in a supercritical state following compression in the turbopump and absorbs the heat load with an accompanying increase in temperature. Consequently a temperature pinch point occurs within the condenser at around 80K and can only be controlled by increasing the hydrogen flow to several times stoichiometric. The air pressure within the condenser affects the latent heat of vaporisation and the liquefaction temperature, and consequently has a strong effect on the fuel/air ratio. However at sea level static conditions of around 1 bar the minimum fuel/air ratio required is about 0.35 (ie: 12 times greater than the stoichiometric ratio of 0.029), assuming that the hydrogen had been compressed to 200 bar. Increasing the air pressure or reducing the hydrogen pump delivery pressure (and temperature) could reduce the fuel/air ratio to perhaps 0.2, but nevertheless the fuel flow remains very high. At high Mach numbers the fuel flow may need to be increased further, due to heat exchanger metal temperature limitations (exacerbated by hydrogen embrittlement limiting the choice of tube materials). To reduce the fuel flow it is sometimes proposed to employ slush hydrogen and recirculate a portion of the coolant flow back into the tankage. However the handling of slush hydrogen poses difficult technical and operational problems. From a technology standpoint the main challenges of the simple LACE engine are the need to prevent clogging of the condenser by frozen carbon dioxide, argon and water vapour, the ability of the condenser to cope with a changing g vector, and the design of a scavenge pump able to operate with a very low NPSH inlet. 
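A rough heat balance shows why the condenser drives the fuel/air ratio so far above stoichiometric. All property values below are approximate assumptions (not figures from the paper): the air must shed both sensible heat and its latent heat of liquefaction, while the pinch point caps how much temperature rise the hydrogen coolant can usefully absorb.

```python
# Approximate property values (assumptions, not from the paper):
CP_AIR = 1.005      # specific heat of air, kJ/(kg K)
LATENT_AIR = 205.0  # latent heat of air liquefaction, kJ/kg
CP_H2 = 14.0        # specific heat of supercritical hydrogen, kJ/(kg K)

def lace_fuel_air_ratio(t_air_in_k, t_air_liq_k, dt_h2_k):
    """Minimum H2/air mass ratio from a simple condenser heat balance.
    dt_h2_k is the hydrogen temperature rise permitted by the pinch point."""
    q_air = CP_AIR * (t_air_in_k - t_air_liq_k) + LATENT_AIR  # kJ per kg of air
    q_h2 = CP_H2 * dt_h2_k                                    # kJ per kg of H2
    return q_air / q_h2

# Air cooled from ~230K to the ~80K liquefaction point, hydrogen rise
# pinch-limited to ~75K:
ratio = lace_fuel_air_ratio(230.0, 80.0, 75.0)  # ~0.34, vs stoichiometric 0.029
```

With these assumed values the balance lands near the 0.35 fuel/air ratio quoted above, roughly twelve times stoichiometric, making the origin of the LACE engine's poor specific impulse apparent.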
Nevertheless, performance studies of SSTOs equipped with LACE engines have shown no performance gains, due to the inadequate specific impulse in airbreathing mode despite the reasonable thrust/weight ratio and Mach number capability. The Air Collection Engine (ACE) is a more complex variant of the LACE engine in which a liquid oxygen separator is incorporated after the air liquefier. The intention is to take off with the main liquid oxygen tanks empty and fill them during the airbreathing ascent, thereby possibly reducing the undercarriage mass and installed thrust level. The ACE principle is often proposed for parallel operation with a ramjet main propulsion system. In this variant the hydrogen fuel flow would condense a quantity of air from which the oxygen would be separated before entering the ramjet combustion chamber at a near stoichiometric mixture ratio. The liquid nitrogen from the separator could perform various cooling duties before being fed back into the ramjet airflow to recover the momentum drag. The oxygen separator would be a complex and heavy item since the physical properties of liquid oxygen and nitrogen are very similar. However, setting aside the engineering details, the basic thermodynamics of the ACE principle are wholly unsuited to an SSTO launcher. Since a fuel/air mixture ratio of approximately 0.2 is needed to liquefy the air, and since oxygen is 23.1% of the airflow, it is apparent that a roughly equal mass of hydrogen is required to liquefy a given mass of oxygen. Therefore there is no saving in the takeoff propellant loading, and in reality a severe structure mass penalty due to the increased fuselage volume needed to contain the low density liquid hydrogen. 

8. Precooled Hybrid Airbreathing Rocket Engines 

This last class of engines is specifically formulated for the SSTO propulsion role and combines some of the best features of the previous types whilst simultaneously overcoming their faults. 
The first engine of this type was the RB545 powerpla