Wednesday, October 30, 2019

Professional management practice Essay Example | Topics and Well Written Essays - 1750 words

Professional management practice - Essay Example Hence the organization must be prepared to face the consequences. SIMS.net is a good alternative to the manual register system. SIMS will provide more benefits to the school than the existing manual register maintenance. Change management is a process of transition in which staff and resources have to be modified according to the change. The changes in the organization must be specified well in advance; this avoids the problems that may arise from the change. Management of change is a demanding task, and the changes and their corresponding requirements must be known (Wilhelm, 2003). This can be achieved by conducting a study in the organization. A change must be carried out within a framework (Rodd, 1994). This framework has a predefined set of tasks and methods that must be followed; by using the framework the organization can proceed with the proposed changes and ensure that the modifications take place in a structured manner. There are various types of changes that may take place in an organization: changes to the organizational structure, to the technology in use, or to management policies. Before implementing a proposed change, the company must conduct a feasibility study. This study helps management understand the current situation and position of the organization; based on its results, the organization can decide whether or not to implement the change. Change management involves several tasks. Once a change is made it has to be assessed periodically to ensure that the changed system works well and does not cause losses to the company. Change management must be planned so that employees co-operate and work in the new system, and staff must be given proper training to work on it. Change management is a process that consists of several steps. It is a step-by-step routine that helps to

Monday, October 28, 2019

Postpartum Stress Disorder Essay Example for Free

Postpartum Stress Disorder Essay The postpartum period has been defined as a bringing forth of the period following childbirth (Webster, 1988, p. 1055) or occurring after childbirth or after delivery, with reference to the mother (Dorland, 1988, p. 1343). In nursing or medical textbooks, the postpartum period is defined as the 6-week interval between the birth of the newborn and the return of the reproductive organs to their normal non-pregnant state (Wong & Perry, 1998, p. 480). However, Tulman and Fawcett (1991) found that the recovery of postpartum women's functional status from childbirth takes at least 3 to 6 months. Webster's Dictionary defines stress concretely as a physical, mental, or emotional strain that disturbs one's normal bodily functions (Webster, 1997, p. 735). Stress is produced by stressors. Wheaton (1996) defines stressors as conditions of threat, demands, or structural constraints that, by the very fact of their occurrence or existence, call into question the operating integrity of the organism (p. 2). In addition, four characteristics of stressors are described: (1) threats, demands, or structural constraints; (2) a force challenging the integrity of the organism; (3) a problem that requires resolution; and (4) identity relevance, in which the pressure exerted by the stressor derives its power, in part, from its potential to threaten or alter identities. Further, awareness of the damage potential of a stressor is not a necessary condition for that stressor to have negative consequences, and a stressor can be defined bidirectionally with respect to demand characteristics. That is, it is possible for both over-demand and under-demand to be stress problems (Wheaton, 1996). Accordingly, based on the above definitions of the postpartum period, stress, and stressors, postpartum stress is defined as a constraining force produced by postpartum stressors. Postpartum stressors are defined as conditions of change, demand, or structural constraint that, by the very fact of their occurrence or existence within six weeks after delivery, call into question the operating integrity of body changes, maternal role attainment, and social support. Because of its many adjustments, the postpartum period has been conceptualized as a time of vulnerability to stress for childbearing women (Too, 1997). Postpartum Period The postpartum period has been conceptualized by a variety of cultures as a time of vulnerability to stress for women (Hung and Chung, 2001). It is characterized by dramatic changes and requires mandatory adjustments that involve many difficulties and concerns, possibly leading to new demands or structural constraints and, therefore, stress. All mothers face the multiple demands of adjusting to changes in the body, learning about the new infant, and getting support from significant others. For women going through this transition, it may be a uniquely stressful life experience. Several stressors specific to the puerperium have been identified in the literature. Those pertaining to body changes include: pain/discomfort, rest/sleep disturbances, diet, nutrition, physical restrictions, weight gain, return to prepregnancy physical shape, care of wounds, contraception, resuming sexual intercourse, discomfort of stitches, breast care, breast soreness, hemorrhoids, flabby subcutaneous tissue, and striae.
Stressors pertaining to maternal role attainment include: concerns about infant crying, health, development, bathing, clothing, handling, diapering, night-time feeding, breastfeeding, conflicting expert advice, keeping the baby in an environment with a comfortable temperature, bottle feeding, appearance, safety, elimination, body weight, skin, the baby's sex, breathing, spitting up, sleeping, and cord care (Moran et al., 1997; Too, 1997). Finally, those stressors pertaining to social support include: running the household, finances, perception of received emotional support, giving up work, finding time for personal interests and hobbies, the father's role with the baby, relationship with the husband, restriction of social life, relationship with children, and coordinating the demands of husband, housework, and children (Moran et al., 1997). In addition, Hung and Chung (2001) show that after childbirth women will encounter another type of stress during the postpartum period, which is characterized by dramatic changes and requires adjustment. Conditions of change, demand, or structural constraint may occur during these dramatic changes, creating many difficulties or concerns. Therefore, in addition to general stress, postpartum stress is induced after delivery during the postpartum period. Postpartum Stress Disorder Postpartum Stress Disorder (PSD) is the most serious, least common, and most highly publicized of the postpartum mood disorders: mothers with PSD have killed their infants and themselves. It is on the extreme end of the postpartum continuum of mood disorders (Nonacs, 2005) and attention to symptoms is vital for any postpartum support program. The treatment issues will not be fully discussed here because of their specialty and complexity. However, it remains a primary function of the service delivery to recognize symptoms and refer appropriately for specialized psychiatric care and management. A sensitive, direct question such as, "Some women who have a new baby have thoughts such as wishing the baby were dead or about harming the baby; has this happened to you?" (Wisner et al., 2003, p. 44), is an essential element of postpartum evaluation, and Wisner and colleagues (2003) have suggested that this question be asked of all postpartum women. PSD is a rare, severe disorder with a prevalence of one to two cases per one thousand births (Seyfried & Marcus, 2003). Symptoms are abrupt and often occur within 48 hours of delivery but can be delayed as long as two years (Rosenberg et al., 2003). Typically, however, symptoms occur within the first three weeks, and two thirds appear within the first two weeks postpartum (Chaudron & Pies, 2003). Symptoms include mood lability, distractibility, insomnia, abnormal or obsessive thoughts, impairment in functioning, delusions, hallucinations, feelings of guilt, bizarre behavior, feelings of persecution, jealousy, grandiosity, suicidal and homicidal ideation, self-neglect, and cognitive disorganization (Wisner et al., 2003). Women with PSD who harbor thoughts of harming their infant are more likely to act on those thoughts (Wisner et al., 2003). Because of the severity of the illness and significant concern for the safety of both the infant and the mother, PSD is considered a psychiatric emergency and hospitalization is necessary. Etiology of PSD There has been some debate about the etiology of PSD. As noted previously, the incidence is approximately one or two women per one thousand births.
This rate has remained unchanged for the last 150 years (Wisner et al., 2003). In cross-cultural studies the rates for PSD are similar to those reported in the United States and the United Kingdom. These findings suggest a primary etiologic relationship between PSD and childbirth, rather than psychosocial factors (Wisner et al., 2003). O'Hara (1997) has noted that women are 20 to 30 times more likely to be hospitalized for PSD within thirty days after childbirth than at any other time during the life span, leading him to speculate, with little doubt, that for women there is a specific association between childbirth and PSD. There are subgroups of women who may be more likely to develop stressful symptoms after delivery. Primiparas appear to have a higher risk for PSD than multiparous women (Wisner et al., 2003). This may be the result of an undiagnosed bipolar disorder. Women with a history of bipolar disorder or PSD have a 1 in 5 risk of hospitalization following childbirth (Seyfried & Marcus, 2003). The overall pattern of symptoms described as PSD suggests the illness is on a continuum of bipolar mood disorders (Wisner et al., 2003). The clinical presentation of PSD is often very similar to a manic episode (Seyfried & Marcus, 2003). Affective disturbances may be depressive, manic, or mixed (Chaudron & Pies, 2003). While there is no typical presentation, women often display delusions, hallucinations, and/or disorganized behavior. Delusional behavior often revolves around infants and children, and these women must be carefully assessed because thoughts of harming their children are sometimes acted upon (Chaudron & Pies, 2003). The predominant affective symptom in those postpartum women who commit infanticide, filicide, or suicide is depression rather than mania (Chaudron & Pies, 2003). In reviewing the connection between bipolarity and PSD, several studies have shown evidence for a link in four areas: symptom presentation, diagnostic outcomes, family history, and recurrences in women with bipolar disorder (Chaudron & Pies, 2003). The relationship to bipolar disorder is considered quite persuasive and it has been suggested that acute onset PSD be considered bipolar disorder until proven otherwise (Wisner et al., 2003). However, bipolarity does not account for all cases of PSD and a meticulous differential diagnosis is mandatory for those women presenting with stress symptoms. A careful checking of the patient's history for previous manic or hypomanic episodes as well as any family history of bipolar disorder is important in order to rule out bipolar disorder. Organic causes contributing to first onset PSD need to be examined and ruled out. These include: tumors, sequelae to head injury, central nervous system infections, cerebral embolism, psychomotor seizures, hepatic disturbance, electrolyte imbalances, diabetic conditions, anoxia, and toxic exposures (Seyfried & Marcus, 2003). Of special consideration in postpartum women is thyroiditis. This is relatively common in postpartum women and usually begins with a hyperthyroid phase progressing to hypothyroidism. In either phase PSD can occur (Wisner et al., 2003). Obtaining serum calcium levels is important to rule out hypercalcemia for patients displaying PSD symptoms (Wisner et al., 2003). Sleep loss resulting from the interaction of various causes may be a pathway to the development of PSD in susceptible women (Wisner et al., 2003). The later stages of pregnancy and the early postpartum period are associated with high levels of sleep disturbance.
This seems to be more prevalent in primiparous women than in multiparae. Historical and contemporary studies have noted that insomnia and sleep loss are significant and early symptoms of PSD. The rapid and abrupt changes of gonadal steroids after delivery and the evidence that estrogen has an effect on mood and the sleep-wake cycle (Wisner et al., 2003) suggest an interaction between hormonal fluctuations, sleep loss, and the onset of PSD. Treatment of PSD PSD is a severe illness and should be considered a psychiatric emergency requiring hospitalization (Rosenberg et al., 2003). The stigma attached to mental illness, and especially to mothers who may harm their infants and themselves, often prevents women and their families from seeking help. PSD is often marked by periods of lucidity that can fool those close to the mother and health care professionals. Because of the complexity of the diagnosis and treatment, referral to a psychiatric specialist is required and formal treatment is beyond the scope of this program. However, it will be necessary to recognize symptoms and be cognizant of risk factors, such as a history of bipolar disorder or previous PSD. Such awareness is essential, as is the readiness to offer support until adequate services can be implemented (Wisner et al., 2003). Prevention of PSD is unclear, but early identification of a history of bipolar disorder and/or previous PSD would be an element of a comprehensive postpartum program. Prenatal education describing symptoms is an important aspect of a proactive approach to postpartum care. Part of the prenatal and postpartum educational effort will include urging women to share any bizarre thoughts and fears with their health care professionals and families. New mothers experiencing insomnia will be encouraged to seek assistance from their physicians and to engage other family members to care for the infant during nighttime feedings (Wisner et al., 2003). As noted earlier, specific treatment is beyond the scope of this program, but a proactive approach to early identification and recognition of unusual thoughts, feelings, and experiences may help to initiate treatment and avoid tragic results. Conclusion During the postpartum period, women are immersed in the realities of parenting and coping with balancing their multiple roles (e.g., wife, mother, and career woman). However, women frequently report difficulty in adjusting to the needs of the baby and other children, difficulty with housework and routines, concerns over support to cope with family needs, and concerns over weight gain and body changes. Accordingly, postpartum stress has an important role in a woman's life and influences her health status, both physical and mental.

Saturday, October 26, 2019

Differences in Language and Gender Essay examples -- Papers Research C

Differences in Language and Gender There are many differences in language between males and females. This is why we sometimes do not understand the opposite sex. These differences can be lexical, phonological, grammatical or conversational. There have been many studies into gender and conversational behaviour, one of which answers the most common question of who talks the most; this study was conducted by Fishman (1990), who found that in mixed-sex conversation, men talk twice as much as women. This cannot be generalised to all males and females, though, as many people do not follow the trends. Women are more supportive in their behaviour in conversation. They ask more questions, give more feedback, pay more compliments, start up different topics and try to bring others into the conversation. On the contrary, men interrupt, express disagreement, ignore other people and don't like to follow other people's new topics. This shows that women are more cooperative and men are competitive in conversation. Zimmerman and...

Thursday, October 24, 2019

Dementia awareness Essay

What is dementia? Dementia is a gradual loss of brain functions. The most common form of dementia is caused by Alzheimer’s disease, but there are many other forms of dementia including: alcohol-related dementias, vascular dementia, frontotemporal dementias and Lewy body dementia. Key functions of the brain that are affected by dementia. Each case of dementia is different. The area of the brain affected will depend on the type of dementia. Dementia can affect every area of thinking, feeling, and behaviour. It will eventually also affect the person's physical functions. Why may depression, delirium and age-related memory impairment be mistaken for dementia? All of the above manifest with similar symptoms. Depression coupled with age-related memory impairment looks the same as dementia to the untrained eye. Depression and delirium can be treated with medication. However, once they are treated, age-related memory loss can be assessed. If it is dementia it cannot be cured, although medication can be used to ease the symptoms. Medical model of dementia Dementia as a clinical syndrome is characterised by global cognitive impairment, which represents a decline from a previous level of functioning, and is associated with impairment in functional abilities and, in many cases, behavioural and psychiatric disturbances. The Social Model definition of dementia ‘The loss or limitation of opportunities to take part in the community on an equal level with others because of physical and social barriers’ and refers to being disabled as having an impairment defined as ‘the loss or limitation of physical, mental or sensory function on a long-term or permanent basis’. Why is dementia viewed as a disability? In contrast to a medical model the social model regards dementia as an impairment, where a marked difference can be made to quality of life by the way people with dementia are supported. Common causes of dementia The main common causes of dementia are age, genetics and medical history. These factors, coupled with other possible medical diseases, can cause or accompany dementia, such as: Creutzfeldt-Jakob disease, dementia with Lewy bodies, Down syndrome, frontotemporal dementia, Huntington’s disease, mild cognitive impairment, mixed dementia, normal pressure hydrocephalus, posterior cortical atrophy, Parkinson’s disease dementia, traumatic brain injury, vascular dementia and Korsakoff syndrome. Dementia risk and possible prevention The main risk factors of dementia are age and genetics, which cannot be changed. However, researchers continue to explore the impact of other risk factors on brain health and prevention of dementia. Some of the most active areas of research in risk reduction and prevention include cardiovascular factors, physical fitness, and diet.

Wednesday, October 23, 2019

Horror Film and Halloween Essay

Halloween is the one time of year when it's okay to dress up as anything you want to be, and it's also when you can celebrate all things horror and the dead. Halloween started out as a celebration of the dead but has now grown into a wonderful time of costumes and decorations of scary fictional creatures. Dressing up as a scary character or a character you adore is one of the many perks of Halloween, as is going to costume parties with friends and celebrating Halloween together. Watching horror movies and television specials about Halloween is exciting and adds to the holiday mood. Halloween is a celebration of the horror genre, of dressing up, and of enjoying this holiday with people. Costumes are very important when it comes to celebrating Halloween. Children enjoy going out on Halloween and trick-or-treating in their costumes to celebrate (Halloween). For an older and more mature get-together, some adults enjoy going to costume parties with their peers. In the olden days it was typical for costumes to be scary, but nowadays people tend to dress up as their favourite pop star or a favourite character from a movie. Many people enjoy costumes that let them be something scary or someone they admire. Going to a party or throwing a party is always expected to be done during Halloween. This holiday celebration is enjoyed by everyone in certain countries. Many people attend a Halloween party to enjoy the costumes their friends are wearing. The snacks and drinks are shaped and decorated as creepy creatures for the effects of Halloween. Even the music is themed to make the effect feel realistic. People are attracted to Halloween, and going to parties and enjoying it with friends is one of the many perks of this holiday. Enjoying Halloween and getting into the spirit of this holiday makes it more enjoyable. The Halloween episode of a favourite T.V. show really adds to the enjoyment of the season. This celebration is mostly about the mood of being scared and watching a lot of horror movies. The media has a huge influence on Halloween, from movies and television specials to themed music. Halloween is celebrated widely in certain countries. Halloween is a wonderful and exciting celebration made up of traditions and enjoyed in certain countries. People enjoy dressing up as a favourite fictional character or someone real like a favourite role model, going to parties, enjoying their costumes and celebrating this holiday with friends. Most media like T.V. shows will often make a special episode for Halloween. People are attracted to Halloween because it is a unique holiday that celebrates the dead.

Tuesday, October 22, 2019

Consumers' Digital Rights essays

Consumers' Digital Rights essays Music piracy has caused quite a stir with copyright infringement laws in recent months. A current article I found talks about a recent issue within this topic. The main issue is consumers' digital rights. It was published in the San Francisco Chronicle by Benny Evangelista. A group of entrepreneurs have proposed a law to protect consumers and allow them to copy CDs, use an MP3 player, and watch DVD movies on their computer. These three areas are the major pirated areas of digital products. The government is currently trying to crack down on these three areas and make it so it is not possible to do them on any computer. These entrepreneurs feel this is wrong and want to protect consumers' rights. They argue that not all people who own a computer are going to be pirates. They state, "All consumers are not potential criminals." The entrepreneurs have proposed a Consumer Technology Bill of Rights that says the following things should still be okay for consumers to do: record TV shows to watch later, copy songs from CDs to a portable device, make backup copies of content, and translate content into different formats. The concern that the government has is that the rights listed above can be taken advantage of. This article is definitely coming from one side of the topic. The side it is arguing is the consumer side. This side contains both innocent and guilty parties, which includes the people who are taking advantage of digital technology: pirates. The other side is the artists and corporate businesses that want to protect their product. Without this proposal all consumers would be labeled as pirates when just a certain percentage actually are. The writer of this article is a reporter for the San Francisco Chronicle. His primary role is to inform the public about current issues. He specifically wants to write about issues that impact the people of the San Francisco area. ...

Monday, October 21, 2019

Bandura's Social Learning Theory essays

Bandura's Social Learning Theory essays BACKGROUND OF SOCIAL LEARNING AND COGNITIVE THEORY Social learning and imitation was proposed by Miller and Dollard but rejected behaviorist ideas of learning by association. It was a theory of learning, however, that did not account for new responses or the processes of delayed and non-reinforced imitations. Bandura widened the not-yet-developed parts of social learning theory in his book Social Learning and Personality Development, written in 1963. It was not until the 1970s that Bandura discovered there was something missing from the learning theories of the day as well as his own social learning theory. The missing link in his theory was self-beliefs. This was identified in his writing Self-efficacy: Toward a unifying theory of behavioral change. Albert Bandura took up the big debate over the concept of behaviorism. He felt that it was inadequate for describing complex human functioning and rejected the idea that it is a person's environment alone that causes behavior. He argued that the cause-and-effect relationship between environmental forces and behavior outcomes is reciprocal: people's environments and their behavior simultaneously create and affect each other. In his publication Foundations of Thought and Action: A Social Cognitive Theory he stresses that people have certain understandings that allow them to have a certain amount of control over their feelings, actions, and thoughts. Bandura wrote that what people think, believe, and feel affects how they behave. These understandings or beliefs are based on five ideas: symbolizing, self-regulatory, self-reflexive, vicarious and forethought. They are also referred to as his five human competencies. As a result, human behavior is made from a combination of outside influences and these five ideas. Bandura's social learning or cognitive theory is best explained in three categories: observational learning, self-regulation, an...

Sunday, October 20, 2019

Definition and Examples of an Implied Author

Definition and Examples of an Implied Author In reading, an implied author is the version of a writer that a reader constructs based on the text in its entirety. Also called a model author, an abstract author, or an inferred author. The concept of the implied author was introduced by American literary critic Wayne C. Booth in his book The Rhetoric of Fiction (1961): However impersonal [an author] may try to be, his reader will inevitably construct a picture of the official scribe who writes in this manner. Examples and Observations [I]t is a curious fact that we have no terms either for this created second self or our relationship with him. None of our terms for various aspects of the narrator is quite accurate. Persona, mask, and narrator are sometimes used, but they more commonly refer to the speaker in the work who is after all only one of the elements created by the implied author and who may be separated from him by large ironies. Narrator is usually taken to mean the I of the work, but the I is seldom if ever identical with the implied image of the artist. (Wayne Booth, The Rhetoric of Fiction. University of Chicago Press, 1961) Too often in my early work, I suggested a total communion between two utterly confident, secure, correct, and wise human beings at the top of the human heap: the implied author and me. Now I see an implied author who is manifold. (Wayne C. Booth, The Struggle to Tell the Story of the Struggle to Get the Story Told. Narrative, January 1997) Implied Author and Implied Reader A classic example of mismatching in kind is The Jungle, by Upton Sinclair. The implied author intends that the implied reader will react to the horrifying account of the Chicago meatpacking industry by taking socialist action to improve the workers' lives. In other words, the implied reader of The Jungle already cares about workers in general, and the implied author intends that, building on that old value, the reader will primarily be motivated to adopt a new value: socialist commitment to helping Chicago meat workers. But, because most actual American readers lacked sufficient concern for workers, a mismatch occurred, and they failed to react as intended; The Jungle ended up moving them only to agitate for improved sanitation in meatpacking. (Ellen Susan Peel, Politics, Persuasion, and Pragmatism: A Rhetoric of Feminist Utopian Fiction. Ohio State University Press, 2002) Controversies As our study of implied author reception will show, there is no consistent correlation between the contexts in which the concept has been used and the opinions that have been put forward regarding its usefulness. In interpretive contexts, both supporting and opposing voices have made themselves heard; in descriptive contexts, meanwhile, the implied author has met with near-universal hostility, but even here its relevance to textual interpretation occasionally attracts a more positive response. (Tom Kindt and Hans-Harald Müller, The Implied Author: Concept and Controversy. Trans. by Alastair Matthews. Walter de Gruyter, 2006)

Saturday, October 19, 2019

Prepare a research paper on one form of soil degradation, its impact,

Prepare a research paper on one form of soil degradation, its impact, methods that are being used to reduce or reverse its impact - Research Paper Example The paper also presents various methods that are being used to reduce or reverse its impact. The relevance of soil erosion to the sustainable use of soil has been discussed. Soil Erosion Soil erosion is a phenomenon that has been taking place for many years. Loose soil on the earth's surface is moved by water and wind, especially where the ground is bare. As the soil is formed, it is moved away to a different place if it is not covered or held tight by vegetation (Toy et al. 2002). Intense human activity has caused soil in the recent past to be moved at a higher rate than its formation. Some activities such as overgrazing and inappropriate farming practices have increased the vulnerability of soil to erosion. Soil that is left bare is carried away by strong wind or rainfall and deposited in rivers and water masses (Cox and Ashley 2000). Rain splash is among the causes of soil erosion, whereby very strong raindrops fall on bare soil, detaching and moving it for a short distance. The effects of splash erosion are usually in-situ since the soil is only moved over a minimal distance. Moreover, the rain must fall with significant intensity for erosion to take place. The soil is redistributed on the surface unless the area is sloping. Rill erosion may occur when the soil is moved along channels down slope. When the intensity of rainfall is high, the channels may enlarge to form gullies. Gully erosion is more pronounced in many parts of the world and is associated with mass movement of soil (Bathgate and Pannell 2002). Generally, when the rain falls on soil, a substantial amount of water is absorbed until the soil is saturated. It takes time for the water to infiltrate, and therefore the more time the water remains on the soil surface the greater the possibility of absorption. Soil erosion is mainly attributed to overland flow, which is the water that does not infiltrate into the soil. This occurs mostly when the rainfall is sudden and of high intensity, giving little time for absorption. Excess runoff is moved down slope by gravity, and as rills converge at the bottom of the slope, larger gullies are formed; the overall result is a high intensity of erosion and huge soil deposits down slope (Boardman 2006). Wind is also a significant cause of soil erosion, especially in semi-arid areas. It redistributes soil and may also move it over a long distance. Soil with individual particles detached through human and animal activities is susceptible to wind erosion. Soil may also move down slope through tillage (Troeh, 2003). This is usually attributed to wrong methods of plowing, such as contour farming down slope. Apart from moving the soil, tillage creates weaknesses in soil layers, making them susceptible to other forms of erosion. Soil erosion may take place in a gradual and unnoticeable manner, eventually causing significant impacts on the soil. In most cases, people tend to control soil erosion once it has occurred rather than putting preventive measures in place to avoid its occurrence (Abel 2001). Impacts of Soil Erosion Soil erosion is a major environmental problem in the current day since it does not only affect the productivity of land in-situ but also affects the environment ex-situ where the soil is deposited.
It has been a significant contributor to flash floods in areas downstream as soil layers accumulate in river channels, thereby raising the riverbeds. The result has been mass displacement of populations and damage to crops (Vaclav 2000). On the other hand, soil erosion

Sand and gravel operators in Sault Ste. Marie, Ontario Essay

Sand and gravel operators in Sault Ste. Marie, Ontario - Essay Example The availability of sand and gravel has facilitated trade for the operators since this era. In this case, the operators' trade took a unique course within the locality. There was a vivid observation that the operators would sell most of the products within the Sault Ste. Marie vicinity. This trend has prevailed for a long duration since the onset of the sand and gravel trade within the locality. It was attributable to specific facts that involved city policies and regulations. Ontario has had a rugged terrain throughout its history. The city authorities had the zeal to reform the land and propagate agriculture (Mackintosh, 16). In this case, agriculture required a fine terrain with ideal edaphic factors. Agriculture was to become a complement of the pit business and mining in this region. The interests of the authorities were to enhance both sectors in Ontario. In this plan, southern Ontario was engaged in agricultural activities while the northern part was to retain aggregate resources (Mackintosh, 16). This led to the production of a policy that designated Sault Ste. Marie to retain the aggregate resources. This was an ideal decision from the authorities. However, it raised intricate issues and hardships among inhabitants who practiced the different economic initiatives. Conflict arose in places where the agrarian and aggregate land would coincide. After the implementation of the dual economy, agriculture took rampant growth. This is evident in the production of fruits like cherries and grapes, as well as peaches (Mackintosh, 16). As denoted previously, Sault Ste. Marie was dominant in aggregate resources. During its aggregate activities, Sault Ste. Marie was responsible for the blockage and deposits in Root River. It was also responsible for depositing materials in Cannon Creek. These were paramount resources for the enhancement of agriculture. Therefore, the aggregate deposits were significantly detrimental to the thriving of agriculture. Consequently, the authorities charged Sault Ste. Marie over the blockages it had caused. According to the policy, any individual who made deposits in rivers or at the banks would be charged. A five or ten thousand dollar fine would be imposed on the offender (Laskin, 10). The crime could also lead to both a fine and one year of imprisonment. This policy has been in existence from the 1970s to date. This is a key reason for the dismal trade to Ontario in aggregate resources. Any form of deposit that would degrade the quality of water in this municipality would attract legal charges. Evidently, this policy was a key factor in the decreased trade of aggregate materials to Ontario. The traders of aggregate materials would prefer not to incur a risk in the transportation process. In this case, they preferred to trade with the local buyers of aggregate materials. Their preference for local trade remained intact in spite of the low profit margins in Sault Ste. Marie. Therefore, the Ontario policy was a key factor in the local preference by aggregate traders in Sault Ste. Marie. Southern Ontario is entirely vulnerable to aggregate pollution (Laskin, 9). Poverty has been a sensitive issue in the confines of Sault Ste. Marie. There has been a major concern to eradicate poverty in this locality (Coulter, 9). Poverty eradication would bring a new phase in diverse sectors of this vicinity.
For example, it would enhance education attainment and healthcare, among others. The eradication would also bring a positive economic impact on the vicinity. In this locality, poverty is rampant among a large populace. Due to this fact, the majority of individuals do not hold professional qualifications for ideal careers. This is a trend in the sand

Friday, October 18, 2019

Service firm Management Essay Example | Topics and Well Written Essays - 750 words

Service firm Management - Essay Example One, because professional services in business have increasingly become very important. Additionally, businesses require professional service firms in their efforts to attract and retain employees, motivate them, and give them the knowledge they require (Rose & Robinson). The main concern of this article is to show how leading professional service firms are managed to overcome their challenges and still emerge profitable. Rose and Robinson affirm that performing the best does not mean these firms do not have challenges; it only shows how effectively they manage them. They list some of the key challenges faced by these firms, such as staff satisfaction, client service balancing and partner profitability. Moreover, they need to provide insights on things like leverage (the ratio of partners to fee-earners) and analyzing how busy the fee-earners are (Rose & Robinson). This article emphasizes the importance of creating a favorable environment for both employees and clients in a company. According to Rose, personal engagement is the most appropriate way to lead a professional service firm. As a manager, one needs to be fully engaged in the company in order to realize good results. In a service company like this, performance is not evaluated by the number of products produced but rather by the quality of services offered. As the manager of a service firm, one is required to be in constant assessment of the customers' and employees' needs (Rose & Robinson). The manager should devise ways of getting feedback from the customers on the services offered. In addition, the manager should also be in close contact with the employees, asking them what they feel about the firm, and be ready to incorporate their views into the running of the company. Another factor of good management is staff motivation. Rose & Robinson first highlight the importance of recruiting competent individuals to the firm and then explain how creating a good environment for them

Death with Dignity Essay Example | Topics and Well Written Essays - 1250 words

Death with Dignity - Essay Example Her decision faced immense opposition from different sections of society. Indeed, despite its aim to alleviate patient suffering, doctor-assisted suicide goes against moral and ethical principles and should therefore not be allowed. Oregon was the first state to implement the Death with Dignity Act, in 1997. It allows terminally ill patients who are of competent mental mindset and aged over 18 to obtain lethal medication to end their lives (Yuill 61). Such patients are required to make a written request and two oral ones in a span of 15 days. The prescribing physician should concur with the prognosis or diagnosis that supports death with dignity, and in consultation with another physician agree to offer assisted suicide. Over time, more states have embraced this legislation as a way out for patients with lingering and intolerable pain. Just a few countries in Europe have legalized death with dignity, notably the Netherlands, Luxembourg, Switzerland and Belgium (Zakaria). Other parts of the world, including Ancient Greece and Rome, practiced doctor-assisted suicide for generations (Loomis 146). However, the issue continues to elicit heated debate from different quarters on its morality and ethicality. Allowing physician-assisted suicide would lead to the inclusion of more people in the eligible groups. According to Yuill, allowing assisted suicide would mount pressure upon people feeling that they have become a burden to their families and even healthcare providers to include more categories of people in the death with dignity category (32). This could extend to euthanasia, or further, to involuntary euthanasia. Thus, allowing death with dignity presents grounds for abuse of the practice, specifically when driven by greed as opposed to love. Those who stand to inherit from the patient could encourage premature death of

Thursday, October 17, 2019

Analysis of Legalization of Marijuana Article Example | Topics and Well Written Essays - 1250 words

Analysis of Legalization of Marijuana - Article Example If one were to analyze the legislation and initiatives of the past few years, it would be evident that public favor to legalize medicinal marijuana has increased, and this in turn has propelled some states to decriminalize medicinal marijuana. Only a few weeks back, the Joint Mental Health and Substance Abuse Committee of Massachusetts removed criminal penalties for possession of less than one ounce of marijuana by a 6-1 vote ("Massachusetts: Decrim Bill Advances", 2006). The Joint Committee on the Judiciary was scheduled to begin deliberations on it in March. The legislation aimed to decrease penalties for minor possession, i.e. less than an ounce of marijuana, to a civil offense instead of a criminal offense as is the custom, and sought to reduce the fine of $500 to $250. This legislation, however, was quashed by the state legislature, and medical or other use of marijuana in Massachusetts still remains illegal. In February this year, Congress, hitherto immovable and unbending on all marijuana-related issues, took a significant step by allowing students previously charged with marijuana possession eligibility to apply for student aid. Enacted in the year 1998, this ban - commonly known as the "drug offender exclusionary provision" of the Higher Education Act - has refused financial aid to some 175,000 students until now. To some, this has been landmark legislation in the fight for marijuana decriminalization and a tentative admission by Congress of the futility of penalizing citizens for the possession of a recreational drug ("Congress Scales Back Ban On Student Aid For Drug Offenders," 2006). More evidence of growing support for marijuana legalization came to view in 2005 when the US House of Representatives voted against the lifting of a ban on medicinal marijuana. The most important point, however, was that despite the vote going against marijuana legalization, 161 House members had voted in favor of marijuana, which was the record highest. In November 2005, the population of Denver voted to eliminate penalties for the possession of one ounce of cannabis by citizens more than 21 years of age. Fifty-four percent of voters decided in favor of legalization ("Denver Votes To Abolish Pot Penalties," 2005). In 1998, voters in Oregon had voted in favor of a law that allowed patients to possess and grow marijuana for medical reasons. In August 2005, the voters of Oregon further amended the law in favor of marijuana users by allowing marijuana-dependent patients to grow and own 24 ounces of cannabis as opposed to the previous 3 ounces ("Legislature Amends Oregon Medical Cannabis Law," 2005). The most important point of objection raised by those who oppose decriminalization of marijuana is the type of message it will send to the citizens of America in general and to the youth and children in particular. They hold that although some medical reports reveal marijuana to be the least harmful of all drugs, it cannot be denied that it is a drug and harmful as well. In addition, if marijuana use and possession were to be legalized for medicinal purposes it will only be available through two means.

Observation Report On Special Education Essay Example | Topics and Well Written Essays - 1250 words

Observation Report On Special Education - Essay Example This essay covers the actual status of a special education environment. It elaborates on observations made during learning hours within a special needs school. In order to ascertain the functionality of theoretical concepts in practical settings, we will relate attributes learned in class to those observed in the field. These will include a description of the physical elements of the environment, an evaluation of the teaching strategies employed by teachers, an understanding of the daily routine within the school and finally a personal reflection based on observations made within the practical context. Description of Classroom Setting The classroom is located at ground floor level together with the mainstream building, with specially designed exit and entry doors for wheelchairs. Inside the building, I realized that it was a self-contained room with the washroom situated at the back of the room. With a population of thirteen students, one teacher and two aid workers were ready to address any kind of need required by each student. At all times, the two aids, Mrs. Francisco and Mrs. Adams, arrive in class on time, actually by 11 AM. The teacher, Mr. Molesan, is tasked with delivering class content to the students, who belonged to 5th and 6th grade. On the other hand, Mrs. Francisco and Mrs. Adams assist the challenged students in pushing their wheelchairs or in managing any difficulty related to their disabilities. In order to minimize disruption and inconvenience created by movements, the students are only pulled out of the classroom when necessary, for example during lunch break. Inside the class are seven learning centers, each located at a strategic position. With these centers, students can conveniently move from the computers section to the arts and craft center. All 13 students in the class have various forms of disabilities, which include communication impairment, auditory impairment, mild physical challenges and a few with multiple disabilities. Based on the appearance of the classroom setting, I would say that the school has been successful in achieving the underlying objective of delivering special education. According to Smith and Tyler (2010), locating the classroom together with the mainstream building enhances inclusiveness, thus fostering a sense of acceptance among the handicapped students. In addition, locating different learning centers within a single classroom serves as an indication that teachers are committed to enhancing learning through special approaches. As if those provisions were not enough, the timely availability of the two aid workers indicates commitment to efficiency in assisting learners with special needs. Physical Environment It is undeniable that learners with special needs require a supportive environment that will increase convenience during learning. Based on my observations, such supportive attributes were installed in various parts of the room. Students using wheelchairs were not required to use staircases but enjoyed gently sloping wheelchair access. Additionally, another physical attribute of the class involved the vastness of the working space. A class which on a normal occasion could accommodate 40 students only had 13 students. According to Boyle (2009), this enables the creation of enough space, allowing easy maneuvering by the physically handicapped members of the class. Students were

Tuesday, October 15, 2019

Shamma Al Rathy Essay Example | Topics and Well Written Essays - 750 words

Shamma Al Rathy - Essay Example ill be charged for the architects' services instead of at cost, significantly increase the company's assets and, correspondingly, the shareholders' equity; Such "unbilled" receivables may mislead if they also include uncompleted stages of the projects. It would be recommended to reflect the projects at cost. At the end of the reporting period ongoing projects should be evaluated, the percentage of their completion estimated, and the corresponding revenue recognized. 6. Premium Coupons: From the consumers' point of view these coupons add value and promote the purchase of coffee, so the cost of redeemed coupons should apply to the sales revenue of coffee; Given that the company can reasonably estimate from previous experience the percentage of the coupons that will be redeemed in the future and that the sale of promotion coffee has already been made, an allowance for the 10% of outstanding coupons should be applied to the 2004 sales revenues for coffee. 7. Travelers Checks: The bank records the 1.5% fee as its revenue; American Express records an increase in the checks outstanding and unearned revenue. 8. Product Repurchase Agreement: Neither of the manufacturers has revenue in 2004: Manufacturer A should not recognize AED 600,000 as revenue because of the possible repurchase of the product in the future; Wholesaler B does not have revenue because compensation for its services will be paid only in July of the following year. 9. Franchises: The initial services (training, introduction to the referral system, and marketing aids) are provided during the year when the agreement is signed. 75% of the receipts come from the annual fees. The company should recognize the initial franchise fee as revenue in the same year the agreement is signed, or allocate it over the first two or three years. If the market becomes saturated,...Trees left to grow for one more year are equivalent to work-in-process inventory for manufacturing companies. On-going projects reflected at the rates at which the customers will be charged for the architects' services instead of at cost significantly increase the company's assets and, correspondingly, the shareholders' equity; It would be recommended to reflect the projects at cost. At the end of the reporting period ongoing projects should be evaluated, the percentage of their completion estimated, and the corresponding revenue recognized. Given that the company can reasonably estimate from previous experience the percentage of the coupons that will be redeemed in the future and that the sale of promotion coffee has already been made, an allowance for the 10% of outstanding coupons should be applied to the 2004 sales revenues for coffee. The company should recognize the initial franchise fee as revenue in the same year the agreement is signed, or allocate it over the first two or three years. If the market becomes saturated, the company's profits are likely to drop 25% in comparison with the previous year and then be kept at the same level.
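To make the coupon and fee items above concrete, here is a brief worked sketch of the arithmetic. Every monetary figure and coupon count below is invented purely for illustration; the 10% allowance and the 1.5% fee simply echo the percentages mentioned in the text and are not taken from the company's actual records.

# Hedged arithmetic sketch of two of the items discussed above; all figures are assumed.
coupons_issued = 100_000
coupons_redeemed = 62_000
cost_per_coupon = 1.50                      # AED, assumed cost of honoring one coupon

outstanding = coupons_issued - coupons_redeemed
# Item 6: allowance for 10% of outstanding coupons, charged against 2004 coffee revenue
coupon_allowance = 0.10 * outstanding * cost_per_coupon

# Item 7: the bank recognizes its 1.5% fee on travelers checks sold as revenue
checks_sold = 200_000                       # AED face value of checks sold, assumed
bank_fee_revenue = 0.015 * checks_sold

print(f"Outstanding coupons: {outstanding}")
print(f"Coupon allowance: AED {coupon_allowance:,.2f}")
print(f"Bank fee revenue: AED {bank_fee_revenue:,.2f}")

Under these assumed numbers the allowance is a small, estimable charge against the period in which the promotional sale was made, which is the matching logic the excerpt argues for.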

Monday, October 14, 2019

Human Movement Essay Example for Free

Human Movement Essay Kinesiology, plainly put, is the study of human movement and all aspects of it. It is the science of human movement. It is comprehensive in its outlook in that it looks at being part of the physical activity (the experience), classroom study of the theories and concepts that make an activity qualify as physical (scholarly) and the professional practice connected with physical activity (Hoffman, 2009). It looks at the muscles, their make-up and how they contribute to human movement; the skeleton, its make-up and contribution to human movement; and the brain in the same context as the previous two. It makes a practitioner of Kinesiology understand human movement from all angles – the why, what, when and which. Method: Since this study entails all aspects of human movement, so too does its learning. A high school football team was used to allow observation of human movement in real time from a passive position. It brought into play the subjectivity of the human mind in the observational data collection aspect of the research. By its very nature football is a contact sport. It thus presented the research with the best tool to observe the different components that constitute human movement. The preconceived notions of the human mind would make for interesting reading when tested against the scientific results. Since not all three different components could be scientifically measured at the same time, the research was focused on the head. This is from the realization that head impacts result in concussions. This is an injury that has the very real possibility of ending one's career and could even end in death. To best investigate impacts to the head, the research used the Head Impact Telemetry System (HITS). This is a wireless monitoring system capable of rapidly identifying athletes who have sustained an impact to the helmet that has the potential of being injurious. It is made to produce real-time post-impact data and transmit the results via radio waves to a computer not more than 150 yd (137 m) from the helmet. When out of range, an onboard storage unit would record up to 100 hits and transmit when back in range. HITS allows for objectivity in the research: for Kinesiology to qualify as a science, measurement must be precise and consistent. Head impact data was captured when a single accelerometer exceeded the preset 15g threshold. Data from 8 milliseconds pre to 32 milliseconds post impact was transmitted and stored. The dependent variables set were linear acceleration, rotational acceleration, jerk, force, impulse and duration of impact. Results: From the data collected in the course of the season (68 sessions: 55 practice days, 13 games), it became clear that there were more impacts during games than during practice. The greatest number of knocks was experienced by defensive line players, offensive linemen, offensive skill players and defensive skill players, in descending order. In ascending order of frequency, the helmet locations hit were the top, side, back and front. Game situations resulted in higher linear acceleration than practice impacts. Top-of-the-helmet hits had the greatest linear acceleration, followed by front, back and side. Again, game-time impacts caused more rotational acceleration than practice. The line players experienced harder hits than skill players in this category. It was also clear that the most forceful hits resulted from front impacts, then back, then side, with top hits the lowest.
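To make the measurement pipeline concrete, the sketch below shows one way threshold-triggered impact capture of this kind could be reproduced in software. It is a minimal illustration only: the 1 kHz sampling rate, the function names and the synthetic trace are assumptions made for demonstration, not the actual HITS firmware or its interface.

# Hypothetical sketch of threshold-triggered impact capture, loosely modeled on the
# HITS description above; sampling rate, helper names and data are assumed.
import numpy as np

SAMPLE_RATE_HZ = 1000        # assumed sampling rate
THRESHOLD_G = 15.0           # preset trigger threshold quoted in the study
PRE_MS, POST_MS = 8, 32      # capture window around each trigger

def find_impacts(accel_g):
    """Flag samples exceeding the threshold and summarize each capture window."""
    dt = 1.0 / SAMPLE_RATE_HZ
    pre = int(PRE_MS / 1000 * SAMPLE_RATE_HZ)
    post = int(POST_MS / 1000 * SAMPLE_RATE_HZ)
    impacts, i = [], 0
    while i < len(accel_g):
        if accel_g[i] >= THRESHOLD_G:
            start, end = max(0, i - pre), min(len(accel_g), i + post)
            window = accel_g[start:end]
            jerk = np.gradient(window, dt)           # rate of change of acceleration
            impacts.append({
                "peak_accel_g": float(window.max()),
                "peak_jerk_g_per_s": float(np.abs(jerk).max()),
                "impulse_g_s": float(window.sum() * dt),   # simple rectangular integration
            })
            i = end                                   # skip past the captured window
        else:
            i += 1
    return impacts

# Synthetic trace with a single 20 g spike around sample 500
trace = np.zeros(2000)
trace[500:505] = [5, 12, 20, 14, 6]
print(find_impacts(trace))

The point of the sketch is simply that each dependent variable listed above (peak acceleration, jerk, impulse) can be computed from the same short window of accelerometer samples once a hit crosses the 15g trigger.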
Looking at head jerk, impact force, impact impulse and duration of impact, the figures were higher during games than during practice. The offensive line and defensive skill players had an equal occurrence of head jerk, higher than the other positions. The line players had a longer duration of impact, more impact impulse and more force of impact than the skill players. Maximum head jerk and impact force resulted from hits to the top of the helmet, followed by the front, back and side; for duration of impact, the order was reversed. It was also noted that the harder a player was hit, the higher the linear acceleration, maximum jerk, force and impulse, and the reverse held for softer hits.

Conclusion: From the research, this paper has been able to draw some conclusions from the available data. Some of the conclusions justified the researcher's subjective view at the beginning of the undertaking, while others changed the researcher's perception. It was clear from the data that there were more high linear acceleration collisions in high school than in the statistics available from research done on colleges. This could be a result of the kids wanting to impress; they may not fully appreciate the consequences of their actions. In high school, the chance of a college scholarship means a greater chance of success in life as a result of the extra academic qualifications one acquires. Higher linear acceleration collisions carry a higher chance of concussion. This makes the high school football player more at risk, as very few have comprehensive medical cover that would give them the kind of specialized treatment necessary should the worst happen, and schools, by the nature of their medical cover, cannot provide it. In high school there were more top-of-helmet impacts than in college, which meant higher linear acceleration and impact force magnitude. This is a very dangerous impact location: it exposed the boys to a higher risk of concussion and severe cervical injury, which could easily mean a career-ending injury, paralysis or even death. More effort needs to be put into coaching. Coaches could teach the boys proper tackling technique and make them understand the advantage of keeping the head up and thus avoiding helmet contact. It could be that because college players are more mature, they do not let the excitement get the better of them; they are committed in their tackles while remaining aware of the consequences of their actions. Since the boys in high school are still maturing, they are generally smaller in mass and height than their college equivalents, all the more reason why they need to be taught well. In high school, the most dangerous positions were quarterback, running back and wide receiver, in decreasing order of severity; they had the highest linear acceleration impacts. This could be a result of their always being at full flight (speed) and in the open field, unlike their counterparts, the offensive line and defensive skill players, who may take the highest number of hits but at lower impact. The reason could be that they are always near each other and do not reach full acceleration before getting hit. The players who get hit all the time are the linemen, who are involved in every play on the field. Finally, because of the small pool of players in high school, some ended up playing more than one position, which increased the risk of injury.
This research did raise some very pertinent issues. Proper coaching in high schools should go a long way in reducing the chances of serious injury on the field. Also, better knowledge of head injury risks should lead to a better understanding of how to handle and care for them if and when they occur. This research has produced invaluable knowledge for Kinesiologists.

Sunday, October 13, 2019

Evolution Of Speaker Manufacturing English Language Essay

Evolution Of Speaker Manufacturing English Language Essay A speaker is an electrical device that converts electrical signals to mechanical motion in order to create sound waves. A transducer, which is another name for a speaker, is a device that converts one form of energy to another. The speaker moves in accordance with the variations of an electrical signal and causes sound waves to propagate through a medium such as air or water. The first electrical speaker, patented by Alexander Graham Bell in 1876, was for the earpiece of the telephone. This design was later improved upon by Ernst Siemens and Nikola Tesla in 1877 and 1881 respectively. Siemens and Tesla used a metal horn driven by a membrane attached to a stylus to create the design that would be the basis for the modern speaker. Thomas Edison was working on a design at this time using compressed air as the amplifying mechanism. He quickly found this was not the most effective way to create the mechanical waves that produce sound, withdrew his patent application, and settled on the metal horn design. The metal horn speaker is the type found on antique record players.

Metal Horn Speaker

Moving Coil Speaker

The modern design of the moving coil driver was established by Oliver Lodge in 1898. Lodge was a British physicist and writer who was involved in many key patents in wireless telegraphy. In 1915, Magnavox emerged as the first public company to produce a loudspeaker; this was the first practical use of moving coil drivers in a loudspeaker. Magnavox was started in that same year by Edwin Pridham and Peter L. Jensen, with a focus on developing consumer electronics, and would later go on to be the first to develop a phonograph loudspeaker. Today Magnavox is owned by one of the world leaders in electronics, Philips. In 1924, Chester W. Rice and Edward W. Kellogg received the first patent on the moving-coil, direct-radiator loudspeaker principle. Their patent differed from previous attempts because of the adjustment of mechanical parameters in their design: the fundamental resonance of the moving system takes place at a lower frequency than that at which the cone's radiation impedance becomes uniform. In 1926, Rice and Kellogg sold the loudspeaker, the Radiola, which was superior to anything previously invented because it decreased sound distortion and improved audio quality for the buyer. These speakers used electromagnets instead of large permanent magnets, because larger, more powerful magnets were not available at a cheap enough price at the time. In the 1930s, manufacturers began placing drivers covering two or three frequency bands in their speakers, which allowed for increased quality, sound pressure levels, and frequency response. Many of the components involved in the production of modern speakers have been improved upon from their initial designs. The biggest improvements have occurred mainly in the makeup of the materials in the speaker and in the enclosure design; the diaphragm materials and permanent magnet materials are among the speaker components that have improved throughout the years. With the advent of computer-aided design and increased accuracy in measuring techniques, the development of the speaker and the quality of sound have grown rapidly in recent years.
The modern loudspeaker has a similar makeup to that of earlier designs, but some of the basic ideas behind the design have changed to give us the speaker we have today.

The Modern Speaker

Modern speakers use a permanent magnet and an electromagnet to induce the reciprocating motion of the diaphragm. The alternating current going through the electromagnet constantly reverses the magnetic polarity of the coil, thus reversing the forces between the voice coil and the permanent magnet. This causes a rapid back-and-forth motion of the coil resembling that of a piston. When the coil moves, it causes the diaphragm to vibrate the air in front of the speaker, creating sound waves. The frequency and amplitude of the electrical audio signal dictate the rate and distance that the voice coil moves, thus determining the frequency and amplitude of the sound waves produced by the diaphragm. Drivers are only able to create sound in a given range of frequencies, so many different types of drivers must be manufactured to account for the wide range of possible frequencies. The main components of the modern speaker are the diaphragm, permanent magnet, suspension, voice coil, and basket, with three other important features being coaxial drivers, speaker enclosures, and audio amplifiers. In the following sections we will break down each component and investigate its improvements, including those in material selection and the manufacturing process.

Diaphragm

One of the main components of a speaker is the diaphragm, sometimes called a speaker cone. The term can also refer to the diaphragm and its surrounding assembly, including the suspension and the basket; however, for our purposes the suspension and the basket will be discussed individually in later sections. Movement of the diaphragm causes sound waves to propagate from the speaker, producing the sound we hear. The ideal properties of a diaphragm are minimal acoustical breakup, minimal standing wave patterns, and linearity of the surround's force-deflection curve. The diaphragm's stiffness and damping qualities, plus the surround's linearity and damping, play a crucial role in reproducing the voice coil signal waveform. Eighty-five percent of the diaphragms sold worldwide are made of cellulose fibers because they can be easily modified by chemical or mechanical means, giving them a practical manufacturing advantage not found in other common diaphragm materials, although reproducibility can be a problem. The lack of reproducibility can affect imaging, depending on the precision and quality of production. Cellulose is also advantageous over other diaphragm materials because of its low production cost. Although cellulose works well as a diaphragm, new synthetic materials are emerging that are more lightweight, allowing for better audio quality, reduced distortion, and increased vibration and shock durability. These materials include polypropylene, polycarbonate, Mylar, silk, fiberglass, carbon fiber, titanium, aluminum, aluminum-magnesium alloy, and beryllium. Polypropylene is the most common plastic material used in a diaphragm. The polypropylene is normally mixed with a filler, such as Kevlar, to reduce manufacturing costs or to alter the mechanical properties of the diaphragm. Polypropylene diaphragms have become increasingly popular with the advancements in modern adhesive technology.
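As a rough numerical illustration of the voice-coil motor principle described above under The Modern Speaker, and not of any particular driver, the short sketch below applies the standard force relation F = B x l x i and divides by a moving mass to get an instantaneous cone acceleration; the flux density, wire length, moving mass and drive currents are all assumed round-number values.

# Illustrative numbers only: flux density, wire length, moving mass and the
# drive currents below are assumed, not taken from any particular driver.
B = 1.0             # magnetic flux density in the gap, tesla (assumed)
wire_length = 8.0   # length of voice-coil wire sitting in the gap, metres (assumed)
moving_mass = 0.020 # cone plus coil moving mass, kilograms (assumed)

def coil_force_and_acceleration(current_amps):
    # F = B * l * i gives the force on the coil; dividing by the moving mass
    # gives the instantaneous cone acceleration (suspension and damping ignored).
    force = B * wire_length * current_amps
    return force, force / moving_mass

for current in (0.5, 1.0, 2.0):
    force, accel = coil_force_and_acceleration(current)
    print(f"{current:.1f} A -> {force:.1f} N of force, {accel:.0f} m/s^2 of cone acceleration")

The point of the sketch is simply that the force, and therefore the cone motion, tracks the signal current, which is why the signal's frequency and amplitude reappear in the sound wave.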
As with all plastic materials, however, polypropylene is prone to viscoelastic creep, the material's tendency to slowly deform and stretch under repetitive stresses. Even so, polypropylene diaphragms remain a popular choice for high-performance speakers because of their consistent performance. Research is presently underway to create new plastic-based diaphragm materials such as TPX, HD-A, HD-I, Neoflex, and Bextrene. These materials generally have the same characteristics as polypropylene, so the manufacturing costs cannot yet be justified for full production. Another option for low-frequency applications is the woven fiber diaphragm. Woven fibers such as carbon fiber, fiberglass, and Kevlar are bonded together with a resin; when the high tensile strength of the woven fibers combines with the adhesive and bonding characteristics of the resin, the result is an incredibly stiff material. This stiffness makes for an excellent low-frequency diaphragm, but it also causes rough high-frequency responses. There have been numerous attempts to improve the high-frequency problems of woven fiber diaphragms, such as using two thin layers of Kevlar fabric bonded together with a resin-and-silica-microball combination, and another attempt employed a sandwich structure of materials with a honeycomb Nomex core. But again, as with the advanced plastic materials, the manufacturing cost versus the performance of the material cannot yet be justified. The final practical modern material for diaphragms is metal. Metal's worst downfall is its poor damping, which causes extreme high-frequency distortion. The most common metals of choice are aluminum and magnesium alloys. Due to the lack of technological advances in damping agents to add to these alloys, metal diaphragms are very rarely used in high-frequency applications; however, these alloys have been commonly used at lower frequencies with great success.

Permanent Magnet

Modern driver magnets have become predominantly permanent magnets. Historically this function was filled by electrically powered field coils; when high-strength permanent magnets became available, they eliminated the need for the additional power supply that drove the coils. When this happened, Alnico magnets became popular. Alnico magnets are created by alloying aluminum, nickel, and cobalt. Until about 1980 Alnico magnets were primarily used, but because of their tendency to become demagnetized, permanent magnets have since been made of ceramic and ferrite materials. Ferrite magnets are constructed by mixing iron oxide with strontium and then milling the compound into a very fine powder. The powder is then mixed with a ceramic binder and closed in a metal die. The die is then placed in a furnace and sintered to bond the mixture together; sintering is the process in which the particles of the powder are welded together by applying pressure and heating them to a temperature below the melting point. Although the magnetic strength-to-weight ratio of ferrite magnets is lower than that of Alnico, they are considerably less expensive, allowing designers to use larger yet more economical magnets to reach a desired performance. In manufacturing, the most significant technical innovation in the speaker is due to the use of neodymium magnets, currently the strongest permanent magnets known.
For this reason neodymium magnets significantly help in producing smaller, lighter devices and improve speaker performance, thanks to their great capacity for generating strong magnetic fields in the air gap. A neodymium magnet is an alloy of neodymium, iron, and boron forming the compound Nd2Fe14B, which has a tetragonal crystalline structure. Important properties in a magnet are the strength of the magnetic field, the material's resistance to becoming demagnetized, the density of magnetic energy, and the temperature at which the material loses its magnetism. Neodymium magnets have much higher values for all of these properties than other magnetic materials, except that they lose their magnetism at lower temperatures. For this reason the alloy is sometimes combined with terbium and dysprosium in order to maintain its magnetic properties at higher temperatures.

Suspension

Another critical element in speakers is the suspension. The purpose of a suspension system is to provide lateral stability and make the speaker components return to a neutral point after moving. A typical suspension system includes two major components, the spider and the surround. The spider connects the voice coil to the frame of the speaker and provides the majority of the restoring force. The surround connects the top of the diaphragm to the frame of the speaker and helps center the diaphragm and voice coil with respect to the frame. Both components work together to make sure the diaphragm and coil assembly move strictly linearly and in line with the center of the permanent magnet. The spider is usually made of a corrugated fabric disk impregnated with a stiffening resin; the name comes from the shape of early suspensions, which were two concentric rings of Bakelite material joined by six or eight curved legs. The surround may be resin-treated cloth, resin-treated non-wovens, polymeric foams, or thermoplastic elastomers molded onto the cone body. An ideal surround has sufficient damping to fully absorb vibration transmitted across the cone-to-surround interface, and the durability to hold out against long-term fatigue caused by prolonged vibration. Advances in suspension manufacturing have come from innovations in synthetic suspension materials. The use of synthetic materials like Kevlar or Konex instead of cotton has made today's speakers much more stable than those made as recently as ten years ago. A more durable suspension means that a speaker's sound quality can remain unaltered for a longer period of time. This is especially a concern for speakers that generally operate at low frequencies, since lower-frequency sounds are created by larger diaphragm travel, and larger diaphragm travel must be supported by more suspension travel.

Voice Coil

The wire in a voice coil is usually made of copper, though aluminum and, rarely, silver may be used. Voice coil wire cross sections can be circular, rectangular, or hexagonal, giving varying amounts of wire volume coverage in the magnetic gap space. The coil is oriented coaxially inside the gap; it moves back and forth within a small circular volume (a hole, slot, or groove) in the magnetic structure. The gap establishes a concentrated magnetic field between the two poles of a permanent magnet, the outside of the gap being one pole and the center post (called the pole piece) being the other. The pole piece and backplate are often a single piece, called the poleplate or yoke.
This magnetic field reacts with the permanent magnet, causing the diaphragm to move and thus producing the sounds we hear. Voice coils can be either overhung, longer than the magnetic gap, or underhung, shorter than the magnetic gap, depending on the application. Most voice coils are overhung, which helps prevent the coil from being overdriven, a problem that causes significant distortion and removes the heat-sinking benefit of the steel, causing the speaker to heat rapidly. The most important characteristic of a voice coil is that it be able to withstand large mechanical stresses and also dissipate heat to its surroundings without damaging the speaker's other components. In early loudspeakers the voice coil was wound onto paper bobbins to remove heat from the system. At the time this was enough to cool the system at average power levels, but as larger amplifiers became available, allowing higher power levels, new technologies had to emerge. To cope with the increasing power inputs, alloy 1145 aluminum foil came into wide use as a substitute for the paper bobbins. Aluminum was popular in industry due to its low manufacturing cost, its structural strength, and the ease of bonding it to the voice coil. However, problems with the foil emerged over extended use at increased power levels. The first was that the foil tended to transfer heat from the voice coil into the adhesives used inside the speaker, causing them to thermally degrade or even burn. The second was that the motion of the aluminum foil inside the magnetic gap created currents that actually increased the temperature of the voice coil, causing long-term reliability issues. In 1955 a new material called Kapton, a polyimide plastic film, was developed to replace the aluminum foil. Kapton solved the problems associated with the aluminum foil; however, neither Kapton nor its improved cousin Kaneka Apical was perfect. Both high-tech materials were costly to manufacture and had a tendency to soften when heated. Despite their downfalls, they remained the most widely used coating for voice coils until 1992, when a material called Hisco P450 was developed. Hisco P450 is a thermoset composite created by taking a thin film of fiberglass cloth and impregnating it with a polyimide resin. This combination provided the necessary mechanical strength and endurance of the polyimide and the necessary temperature resistance and stiffness of the fiberglass. Hisco P450 was able to withstand the grueling temperature requirements of professional speakers while maintaining enough rigidity to withstand the mechanical stresses associated with long-term, high-frequency motion. In recent years the copper wire that is almost always used for the voice coil has occasionally been replaced with aluminum wire in extra-sensitive, high-frequency applications. Aluminum wire is lighter than copper wire and has about two thirds of the electrical conductivity, allowing the wire to move at higher frequencies inside the magnetic gap. Variations of the aluminum wire include copper-clad aluminum and anodized aluminum. Copper-clad aluminum allows for easier winding along with an even further reduced mass, while anodized aluminum is effectively insulated against shorting, which removes concerns about dielectric breakdown. Aluminum wires are great lightweight, low-inductance choices for voice coils; however, they do have their downfalls.
The thermal characteristics of aluminum impose power limitations on the coil. If too much power is passed through an aluminum coil, the adhesive bonds between the wire and the bobbin, or between the bobbin and the spider and coil, can weaken or even burn. To cope with the ever increasing power demands on the voice coil, in addition to wrapping the coil in a high-tech material to improve its thermal properties, the voice coil has also been immersed in a ferrofluid, an oil used to conduct heat away from the voice coil that also creates a small magnetic field, thus increasing the power handling capacity of the voice coil.

Basket

The basket or frame is the fixture used to hold the diaphragm, voice coil, and magnet in their proper places. The rigidity of this part is extremely important to prevent rubbing of the voice coil and to prevent random movements that could cause problems with the permanent magnet. The three most common types of modern baskets are cast metal baskets, rigid baskets made from stamped steel or aluminum, and cast plastic baskets. Each type offers different advantages and disadvantages; these are discussed in the following paragraphs. The stronger the basket, the more power the speaker can handle before failure occurs. A well made basket should have a high power rating, be lightweight, and be able to conduct heat away from the voice coil to prevent physical changes or even possible demagnetization of the permanent magnets. Cast metal baskets are the most rigid of the three in all directions, but they are the most expensive to make. They are produced by melting the desired metal to liquid form; the molten metal is poured into a mold, and once it solidifies, the mold is removed, revealing a cast metal basket. Although more expensive than the other two options, cast metal baskets are usually more rigid, thus preventing unwanted motion; they also have better damping characteristics and can be manufactured in more intricate shapes. Cast metal baskets are usually the preferred choice for higher quality speakers. A less expensive yet less rigid basket can be made out of stamped steel. The stamped steel or aluminum sheets arrive at the manufacturer preformed. The sheets are drilled using a hydraulic press to cut holes that allow air flow to and from the diaphragm, and then pressed in another hydraulic press with a die to form the desired shape. Stamped metal baskets tend to be weaker than their cast metal counterparts, and this weakness can cause the basket to flex if the speaker is used at high volume. The final option, which is even less expensive, is the cast plastic basket. Cast plastic baskets are made by pouring liquid plastic into a mold of the desired shape; when the plastic hardens, the mold is removed, revealing a cast plastic basket. Just like cast metal baskets, cast plastic baskets are easily manufactured in intricate shapes. The lightweight nature of the plastic also makes the speaker lighter, allowing for lower power consumption. However, as with most engineering decisions, the performance of the part decreases roughly in proportion as the cost to produce it decreases. The lower production cost of the plastic basket means that it is a weaker basket.
This weaker plastic basket will allow the most flexing compared with cast metal and stamped steel baskets. The power rating of the speaker would also be lower than with metal baskets, both cast and stamped, due to the weaker strength characteristics of plastic in comparison with metal.

Coaxial Drivers

Coaxial drivers are speaker components that radiate sound from the same point or axis. This is done by placing a high-frequency driver in the center of a low-frequency driver so that they produce sound waves from a single point in a loudspeaker system rather than from separate locations, which is a more beneficial design than having the low- and high-frequency drivers separate. There are many different types of drivers, and each driver produces sound within a limited frequency range. Subwoofers, woofers, mid-range drivers, and tweeters are all driver types capable of emitting different ranges of sound. A coaxial driver takes one of the higher-frequency drivers and places it within a lower-frequency driver. For example, a tweeter, the high-frequency unit, could be placed in the center of a woofer, the low-frequency unit, so that both drivers emit sound from the same point. This design, which improves sound quality, was first designed by Altec Lansing in the 1940s. Although it has many advantages, it is still an uncommon practice in the manufacturing of speakers due to technical and budgetary considerations.

Enclosures

The enclosure of a loudspeaker serves three functions and is made with a specific design that helps improve the quality of the sound produced by the speaker. The first function the enclosure performs is separation of the sound waves. It accomplishes this by preventing sound waves generated at the back of the speaker from interacting destructively with sound waves generated at the front. The enclosure is intended to reduce the distortion created because the waves that emanate from the front of the speaker are out of phase with the waves emanating from the rear; if the front and rear waves were to overlap with one another, the result would be wave interference. The second function the enclosure serves is to stop the echo and reverberation that would be created by the two differing sound source locations on the speaker. Because waves are created at the front and rear of the speaker, the two sets of waves travel through the air differently as a result of their relative locations and arrive at the listener at different times. The third function the enclosure serves is to deal with the vibrations produced by the driver and with the heat produced by the electronic components. Enclosures did not always have the fully enclosed container design they now commonly have; although present day practice says that enclosures need to have a back, before the 1950s they lacked one because of the cooling benefit of an open container. The sealed enclosure, the most common type, is completely sealed so no air can escape. With this type of enclosure the forward wave travels outward into the surroundings, while the backward wave is limited to filling the enclosure. With a virtually airtight enclosure, the internal air pressure is constantly changing: when the driver retracts, the pressure increases, and when the driver moves out, the pressure decreases. Both movements create pressure differences between the air inside the enclosure and the air outside.
Because of this, the driver motion always has to fight the pressure differences it causes. These enclosures are less efficient than other designs because the amplifier has to boost the electrical signal to overcome the force of the air pressure. The force due to air pressure does, however, provide an additional form of driver suspension, since it acts like a spring to keep the diaphragm in the neutral position. This makes for tighter, more precise sound production. Enclosure designs range from very simple rectangular particle-board boxes to very complex cabinets made of composite materials. The simplest enclosures are made to prevent the destructive interference caused by overlapping of the front and rear sound waves from the speaker; the most complex contain acoustic insulation and internal baffles, which prevent interference. Solid materials such as heavy wood are typically used when building enclosures in order to absorb the vibration caused by the speaker driver. This vibration damping is extremely important: a speaker's sound output would be drowned out by the driver's vibrations if an enclosure were not incorporated into the design. Since the beginning of enclosure production, the properties required for minimal energy loss through the enclosure walls have remained unchanged. Strategies employed to reduce energy losses include using thicker enclosure walls, denser hardwood plies and sturdier bracing. The downside to these methods is that they all add significant weight to the enclosure. However, with the production of newer materials that possess an increased stiffness-to-mass ratio, this is changing. These new materials can improve performance and reduce weight while also reducing the cabinet's resonance. The end result is that a greater amount of the speaker's energy is delivered in the intended direction rather than into mechanical vibrations, which are wasted and degrade sound quality. A recent alternative to heavy wood construction of enclosures is the use of composite materials. Composite materials such as carbon fiber were originally developed for the aerospace industry; carbon fiber was a success because of the high demand for a material with increased strength and rigidity. Speaker applications such as enclosures use carbon-fiber materials to create a product with vastly decreased weight and increased strength and rigidity. Enclosures built with carbon fiber can weigh less than half as much as enclosures built from heavy wood, and by limiting speaker resonance they can provide as much as 3 dB more output than the same speaker would otherwise have had in a heavy wood enclosure. Furthermore, carbon-fiber enclosures are extremely durable, adding quality to the final product, and they require almost no maintenance. Even though carbon-fiber enclosures cost around twice as much to produce as traditional enclosures, the lighter weight and extra output are two very advantageous tradeoffs.

Amplifier

An amplifier is any device that increases or decreases the amplitude of a signal. An audio amplifier increases low-power audio signals to a level suitable for loudspeakers. When dealing with a speaker there are many audio amplifiers involved: these amplifiers are responsible for pre-amplification, equalization, tone control, and mixing effects, followed by a higher-power amplifier that provides the final amplification for suitable levels of sound output.
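As a back-of-the-envelope illustration of what boosting a low-power signal to a suitable level involves, the sketch below computes the RMS voltage needed to deliver a few example power levels into a nominal 8-ohm speaker, and the corresponding voltage gain over an assumed consumer line-level input. Both the 0.3 V input level and the 8-ohm load are illustrative assumptions, not figures from the essay.

import math

line_level_vrms = 0.3  # assumed consumer line-level input, volts RMS
load_ohms = 8.0        # assumed nominal speaker impedance

for target_watts in (1, 10, 50):
    v_out = math.sqrt(target_watts * load_ohms)            # from P = V^2 / R
    gain_db = 20 * math.log10(v_out / line_level_vrms)     # voltage gain in decibels
    print(f"{target_watts:>2} W into {load_ohms:.0f} ohms needs about "
          f"{v_out:.1f} V RMS ({gain_db:.1f} dB of voltage gain)")

Even modest listening levels therefore call for tens of decibels of voltage gain, which is why the amplification chain is split into low-level stages and a final power stage.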
Amplifiers are found in wireless receivers and transmitters, CD players, acoustic pickups, and hi-fi audio equipment. Amplifiers are used for high-quality sound production, and depending upon the quality of the amplifier, they may cause distortion, which the speaker enclosures are meant to deal with. Distortion in amplifiers is caused by differences between the phase of the output waveform and that of the input waveform; the smaller the difference between the output and input waveforms, the greater the quality of the final sound. Audio amplifiers consist of resistors, capacitors, power sources, wires, semiconductors, and stereo jacks, all combined on a circuit board to produce the type of amplifier needed.

Types of Speakers

Woofers are loudspeaker drivers designed to produce low-frequency sounds, from around 40 hertz up to around 1000 hertz. The most common design for a woofer is the electrodynamic driver, using a stiff paper cone driven by a voice coil. Woofers are important because they let the frequency range reach down to low levels. Effective woofer designs efficiently convert low-frequency signals to mechanical vibrations; the vibration of the air out from the cone creates concentric sound waves that travel through the air. If this process is done effectively, many of the other problems speakers run into, such as limits on linear excursion, will be greatly reduced. For most speakers the enclosure and the woofer must be designed to work hand in hand. Usually the enclosure is designed around the woofer, but in some rarer cases the enclosure design can actually dictate the woofer design. The enclosure is made to reflect the sounds at the right distance, so that the reflections are not wave-cancelling. A subwoofer is a woofer with a diameter between 8 and 21 inches. Subwoofers are made up of one or more woofers, which can be arranged in many different configurations to produce the best quality of sound. Subwoofers usually play frequencies between 20 hertz and 200 hertz, well within the range of human hearing. The first subwoofer was created in the 1960s and added to the home stereo to create bass for sound reinforcement. Up until this point the only form of audio player that contained bass was the phonograph player created by Magnavox. This allowed for more accurate reproduction of music. Subwoofers are used in all sound systems today, such as in cinemas, cars, stereos, and for general sound reinforcement. A mid-range speaker is a loudspeaker driver that produces sound between 300 hertz and 5000 hertz; these are less commonly known as squawkers. Midrange drivers can be found as cone speakers, dome speakers, or compression horn drivers. Mid-range speakers usually resemble small woofers. The most common cone material for a mid-range is paper, although cones can be found coated or impregnated with polymers or resins to improve vibration damping. Much of the rest of the mid-range speaker is made from plastic polymers. Mid-range speakers that employ the dome setup usually use only 90 degrees of the sphere as the radiating surface; these can be made from cloth, metal or plastic film, and the voice coil in this design is set at the outer edge of the dome. The mid-range drivers most commonly used for professional concerts are compression drivers coupled with horn drivers. More rarely, mid-range speakers can be found as electrostatic drivers. Mid-range speakers handle the most prominent part of the human-audible sound spectrum.
This is the region where most of the sound emitted by musical instruments lies, and it is also where the human voice falls in the audible spectrum. Most television sets and small radios contain only a single mid-range driver. Tweeters are loudspeaker drivers designed to produce frequencies from 2,000 to 20,000 hertz; some tweeters on the market today can produce sounds of up to 45,000 hertz, although the human ear can generally only hear up to about 20,000 hertz. The name tweeter comes from the extremely high pitch it can create. Modern tweeters are different from older tweeters, which were simply smaller versions of woofers. As tweeter technology has advanced, differen
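Since each driver type covers only part of the spectrum, a loudspeaker system has to split the signal between them. As a minimal illustration, assuming an 8-ohm nominal tweeter impedance and a few common capacitor values (none of which come from the essay), the sketch below computes the corner frequency of a simple first-order high-pass crossover that keeps low frequencies out of a tweeter.

import math

tweeter_ohms = 8.0  # assumed nominal tweeter impedance

def corner_frequency_hz(capacitance_farads, resistance_ohms):
    # First-order RC high-pass corner: f_c = 1 / (2 * pi * R * C)
    return 1.0 / (2 * math.pi * resistance_ohms * capacitance_farads)

for c_uf in (4.7, 6.8, 10.0):
    f_c = corner_frequency_hz(c_uf * 1e-6, tweeter_ohms)
    print(f"{c_uf:>4} uF in series with {tweeter_ohms:.0f} ohms -> corner near {f_c:.0f} Hz")

A few microfarads in series with an 8-ohm tweeter already place the corner in the low thousands of hertz, which is consistent with the tweeter operating range described above.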

Saturday, October 12, 2019

To Kill a Mocking Bird Critique :: essays research papers fc

To Kill a Mockingbird is a novel that has received great acclaim, largely due to its setting, themes, and accuracy. The setting, themes, and accuracy of the novel fall into place so well that the novel has earned that acclaim. To Kill a Mockingbird is set in a small town in "fictional" Maycomb County, Alabama, in 1933-35. "It was more of a collection of short stories than a true novel... yet, there was also life" (Commire, 18). The characters of To Kill a Mockingbird were also created from people in Lee's life. For example, she used her father, Frances "Finch" Lee, as a model for Atticus Finch. "To Kill a Mockingbird is a novel of strong contemporary national significance... Miss Lee considers the novel a love story" (Commire, 155). The novel could be considered a love story because it shows the love of a father toward his two children. Apparently, Lee chose the mockingbird to represent the "purity of heart, and selflessness" of characters like Atticus Finch, Tom Robinson, and Boo Radley (Moss and Wilson, 395). To Kill a Mockingbird underscores many themes and represents a universal story from a regional perspective (Stabler). The overall argument involves the obvious plea for justice while mocking the civilization of Southern society. To Kill a Mockingbird is considered a "classic"; it was a bestseller, and it is required reading in many high schools in the U.S. (Stabler). Even today in bookstores like Barnes & Noble it is easy to find a copy of the book on the shelves; it is even showcased on the bags of Barnes & Noble. It is felt that To Kill a Mockingbird gives "an accurate reflection" of life in the South during the 1930s (Stabler), a time of much racism in the South. Edgar Shuster states, "In the course of their growing up, the children do a great deal of learning, but little of that learning takes place in school" (Bernard). It goes to show that not all life lessons can be learned in school. Shuster also states, "The achievement of Harper Lee is not that she has written another novel about race prejudice, but rather that she has placed race prejudice in a perspective which allows us to see it as an aspect of a larger thing" (Bernard), like something that comes from fear and lack of knowledge. Keith Waterhouse believes that "Miss Lee does well what so many American writers do appallingly: she paints a true and lively picture of life in an American small town, and she gives freshness to a stock solution" (Kinsman, 481).

Friday, October 11, 2019

Explain why Martin Luther King was considered an Uncle Tom Essay

There are a number of reasons why Martin Luther King was, and still is, referred to as an 'Uncle Tom' by some. An Uncle Tom is a black man who behaves in a subservient manner towards whites. Malcolm X, among many other blacks, referred to King in this manner. Firstly, many blacks at the time saw King's non-violent practices as overly moderate and passive. This is for a number of reasons, mainly that the Negro extremists he criticised dismissed his passion for non-violence and charged that it hindered the Negro struggle for equality. Many extremists, and those who hoped to go about matters more actively, saw King as shying away from the real problem and not confronting matters head-on. He was perceived by many radicals as 'all talk, no action', having raised the hopes of many young blacks, such as those in riot-stricken Ohio, and having done nothing to fulfil those hopes. Moreover, Malcolm X considered King an Uncle Tom because he was adamant about using non-violence as a political philosophy. Malcolm X saw King's insistence on non-violence as a principle as suicidal, and argued that he was an 'Uncle Tom' because non-violence only makes sense in a situation the person has control over. Malcolm X advocated the idea of self-defence and therefore saw King's idea of inter-dependence as being as obsequious as Uncle Tom. Lastly, Martin Luther King was considered an Uncle Tom because his methods were similar to those of previous figures who were also labelled Uncle Toms. An example is Rosa Parks, who used passive methods to get her way and so was called an Uncle Tom. Similarly, King was using a moderate approach and so was given the same label as those who had previously gone about their business in the same way. All in all, Martin Luther King was considered an Uncle Tom due to the influence of Malcolm X, whose more confrontational methods appealed to black youths who were disappointed with King's failure to fulfil their hopes. Malcolm X's influence resulted in many other blacks sharing the view that King was an Uncle Tom; this, together with the fact that previous icons had been labelled in the same way, led to the growing belief that King was an Uncle Tom.

Thursday, October 10, 2019

Organizations Performance Essay

An organization's performance is vital to its success, and it is important that all employees are on board with making sure the performance is of high quality. It differs from other evaluations within the company because the performance evaluation "focuses on the organization as the primary unit of analysis" (Evaluating the Performance of an Organization, 2012). Evaluating an organization's performance helps determine the actual outputs or end results of the organization against its intended outputs or goals (businessdictionary.com). Product market performance is also included in an organization's performance. Some tools can help the organization change or "improve their policies on behalf of greater preparedness for the many futures ahead" (NYUWagner, 2011). Different areas and tools are used to determine the organization's performance and how well it is doing, or how much improvement it needs to bring its performance up. "Organizations are constantly trying to adapt, survive, perform and influence," but that does not always mean they are successful at doing what they do (Evaluating the Performance of an Organization, 2012). One way an organization can better its performance is by conducting an organizational assessment to diagnose its current performance and see what is working and what could use some improvement. This "tool can help organizations obtain useful data on their performance, identify important factors that aid or impede their achievement of results, and situate themselves with respect to competitors" (Evaluating the Performance of an Organization, 2012). The four main tools for organizational performance are effectiveness, efficiency, relevance, and financial viability. "Effectiveness is the capability of producing a desired result" (businessdictionary.com). This means that if something is considered effective, it has an intended or expected outcome, which is what organizations use to determine whether what is in place is working effectively or whether additional changes need to be made. Effectiveness implies sufficient reason and means to accomplish a purpose; without a purpose there is no effective meaning behind conducting business or maintaining a successful organization. Effectiveness is a very good tool for organizations and managers to understand and become familiar with in order to stay on the right track for success. Another tool that management should be familiar with and make sure the organization practices is efficiency. Efficiency is not to be confused with effectiveness, even though the two are sometimes mixed up. "Efficiency describes the extent to which time, effort or cost is well used for the intended task or purpose" (businessdictionary.com). Typically, efficiency conveys the capability of a specific application of effort to produce a specific outcome effectively, with a minimum amount of waste, expense, or unnecessary effort (Evaluating the Performance of an Organization, 2012). Relevance is a basic tool, but one that can help in the success of an organization. It is "the ability to retrieve material that satisfies the needs of the user," mostly in terms of an information retrieval system (businessdictionary.com).
Management needs relevance in order to be successful, because it must be able to pull the necessary information from production, departments, and other sources to see whether what is being produced or utilized satisfies its requirements. For a business to be effective it needs to "strive for the best possible economic results from the resources currently employed or available" (Drucker, 1963). Being effective is the key to a business being able to grow and establish itself as a company and within the community. Several different techniques are used to help the business become effective. One of the keys to successful management "is to examine the marketplace" and focus on the process of management rather than the output (Drucker, 1963). When focusing on the process of management, it is important for the company to look at strategy, planning and budgeting, to understand the difference between them, and to understand how they work together to make the company successful. "Strategy is a high level plan to achieve one or more goals under conditions of uncertainty" (businessdictionary.com). For a business to be successful it needs to implement some sort of strategy. Strategy is important because it helps to utilize all of the resources that are available, or could be made available, for the project at hand. Most of the time resources are limited, and having the strategy in place will help the process of achieving the company's goals flow more smoothly. "Strategy is also about attaining and maintaining a position of advantage" over opponents or competition, while retaining flexibility instead of sticking to a fixed plan. Allowing slight flexibility lets the company try to keep an advantage over the competition and stay ahead of other organizations. Planning "is the process of thinking about and organizing the activities required to achieve a desired goal" (businessdictionary.com). Strategy could be considered the first step and planning the second step in achieving the desired goal of organizational success. Planning involves the construction and maintenance of a plan. "This thought process is essential to the creation and refinement of a plan" or the combination of it with other plans (NYUWagner, 2011). Planning typically combines forecasting of developments with preparation for how the organization should react to them. For the organization to remain successful, it needs to understand the importance of, and the relationship between, planning and forecasting. "Forecasting can be described as predicting what the future will look like," or what the future might hold for the company, while "planning predicts what the future should look like" (NYUWagner, 2011). Organizations that do not understand the difference between planning and forecasting will not be as successful as those that do, because looking at what something might look like and what it should look like are two different ways of planning. For a business to be successful, it needs to focus on planning so that its predictions describe what the future should look like for it to succeed and stay on the right path. Budgets are also incorporated with strategy and planning; they all intertwine. "A budget is a quantitative expression of a plan for a defined period of time" (businessdictionary.com).
Several different factors can be associated with a budget, such as sales volumes and revenues, resource quantities, costs and expenses, assets and liabilities, and even cash flows. The budget "expresses strategic plans of business units, organizations, activities or events in measurable terms" (Evaluating the Performance of an Organization, 2012). For a company to have success in its daily operations, it needs to make sure it stays on budget and current with all of its projects. Many organizations create a budget for each plan but do not follow through with it. It is one thing to create a budget for a product and another to actually follow through with the budget and make sure everyone stays on track. If the company goes over budget, then the planning and strategy process was not calculated correctly. Everyone involved in the project needs to be familiar with the strategy, plan and budget to keep the organization successful and moving forward instead of always having to backtrack. It is easy to get off track or to change the plan in the middle of the project. It is up to the organization and the team responsible for the project to keep to the budget they were assigned.

Works Cited

NYCWagner. (2011). Retrieved August 21, 2013, from http://www.NYCWager.com

Evaluating the Performance of an Organization. (2012). Retrieved August 1, 2013, from http://www.smallbusinessnotes.com/managing-your-business/business-ethics.html#ixzz2afud6KU0

Business Dictionary. (n.d.). Retrieved July 30, 2013, from http://www.businessdictionary.com

Ferrell. (2011). Business Ethics. Houghton Mifflin Harcourt.

Kirby. (2012). Accounting Principles. McGraw.

Zain, B. (2011). Strategic Management. Pittsburg: McGraw.