Thursday, October 31, 2019

Human Memory Essay Example | Topics and Well Written Essays - 1000 words

The ability to recall the details of a story that occurred long ago, the flow of complicated phrases in long songs, and many other such feats is due to memory. Memory is a process of information retention in which one’s experiences are archived and can be recovered when recalled. Memory and learning are closely interrelated: learning is the acquisition of new knowledge and skills, and memory is the retention of that knowledge. The ability to consider the past, think in the present, and predict the future, as well as the use of language, all rest on learning and human memory. Memory is also a comprehensive term, ranging from childhood and autobiographical memories to the stream of facts recalled in response to a trigger. It includes memory for faces, both familiar ones and those that require concentration to recognise. “The memories for taste and smell, sounds and shapes as well as the feel of things are directly related to the senses”, and can trigger off a flood of nostalgia (Samuel, 1999: 49). 1) Information flows through the brain: Sensory information is stored in the sensory store in the cortex. Some of it is quickly transferred, before it is lost, into the short-term store, then the rehearsal buffer, and finally into long-term storage in the sensory cortex, state Loftus and Loftus (1976). The Papez circuit travels from the hippocampus, around the limbic system and cortex, and back to the hippocampus; the strengthened memory paths become part of long-term memory (Squire, 1991). 2) How neuron networks store and retrieve memories: Neuron networks such as the Papez circuit entrench temporary connections between visual, auditory and limbic neurons to form a new lasting memory. A network in the cortex that contains a particular sensation forms a path defined by its synapses. This is the firing path for nerve impulses that stores and invokes the particular sensation to evoke a related

Tuesday, October 29, 2019

Article summary Example | Topics and Well Written Essays - 1250 words

Researchers are working on these questions and hypotheses because only a few studies have demonstrated broad transfer from training to performance on untrained cognitive abilities, previous training paradigms lacked a pedagogical foundation and were difficult to apply in non-laboratory settings or for long-term behavioral change, the link between music and language remains unspecified, and there has been insufficient testing to support the evidence (Moreno, et al., 1-2). In testing the hypotheses, several techniques were used: short, intense series of training sessions; measuring intelligence with two subtests from the Wechsler Preschool and Primary Scale of Intelligence–Third Edition; measuring executive function using a go/no-go task that records behavioral performance and event-related potentials (ERPs); and a review of previous studies showing an increase in the amplitude of P2 after music training (Moreno, et al., 2). Children aged 4-6 years were tested in the study. Seventy-one children were recruited, but because of drop-outs, WPPSI-III data were available for 64 children: 32 (18 girls and 14 boys) who received visual-art training and 32 (20 girls and 12 boys) who received music training (Moreno, et al., 2). In addition, 16 participants were excluded because of uneasiness with the procedure and noise in the ERP signal, so the final sample size was 48 participants, with 24 in each training group. Moreno et al. found that: there was no difference between visual-art training and music training on intelligence measures in the pretest session; a significant improvement in intelligence scores, marked by improvement only on the verbal test, was noted only in the music group after training; from pretest to posttest, more than 90% of the children in the music program improved their verbal score; the music group outperformed the visual-art group at posttest; group performance was not greatly affected overall, but a significant effect of session appeared only in the music group; the N2/P3 complex showed no significant group differences, but the P2 component differed significantly between groups; after training, the music group showed significantly larger peak amplitudes in the no-go trials whereas the visual-art group did not; and a significant positive correlation was found in the music group only (4-5). The implications of the study are relevant to the education sector, as evidence shows that WPPSI Verbal IQ is highly predictive of academic achievement and that there is a strong relationship between IQ evaluated at age 5 and IQ in later life. In addition, computerized tutorials would make the training easier for educational environments to implement (Moreno, et al., 7). The study of Moreno et al. addressed neuroeducation and neurorehabilitation using computerized technologies.
Context
The findings of the study interest me because of the significant

Sunday, October 27, 2019

Fish Recognition and Classification System Architecture

1.1 Introduction
In the previous chapter, the architecture and approaches of the object recognition and classification system were described in detail, and the shape features of fish to be used in the classification stage were presented. This chapter therefore reviews related work and concepts in the field of object recognition and classification, in particular the main components used to design a fish recognition and classification system architecture and the way these experiments have developed in several cases. The literature review is divided into four main sections. The first covers fish recognition and classification. The second presents image segmentation techniques for segmenting underwater images. The third investigates feature extraction and selection through shape representation and description. Finally, classifier techniques for object recognition and classification, in particular the support vector machine, are reported.

1.2 Fish Recognition and Classification
In recent years many researchers have attempted to combine underwater imaging with learning techniques to develop recognition and classification systems for fish. Castignolles et al. (1994) used an off-line detection method with static thresholds to segment images recorded on S-video tapes, enhancing image contrast with background lighting. To recognize the species, a Bayes classifier was tested after twelve geometrical features had been extracted from the fish images. However, this method requires control of the background lighting, selection of a threshold value and multiple imaging, and where fish line up close to each other it tends to be impractical in real time. Moment invariants are a fast and very simple way to extract features. Zion et al. (1999) extracted features from dead fish tails using moment invariants in order to identify species, and used the image area to estimate fish mass; identification accuracies of 99%, 93% and 93% were obtained for grey mullet, St. Peter's fish and carp, respectively. Zion et al. (2000) later tested the method on live fish swimming in clean water and obtained species identification accuracies of 100%, 91% and 91%, respectively. However, the tail features extracted by moment invariants are strongly affected by water opaqueness and fish motion, so the method needs a clear environment in which all features appear clearly.
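For illustration, the sketch below computes the seven Hu moment invariants of a pre-segmented fish silhouette, the same family of translation-, scale- and rotation-invariant shape features discussed above. It is a minimal example assuming OpenCV and NumPy; the input file name is hypothetical, and the code is not the actual pipeline of Zion et al.

```python
# Minimal sketch: Hu moment invariants of a binary fish silhouette.
# Assumes OpenCV (cv2) and NumPy; "fish_mask.png" is a hypothetical
# pre-segmented binary image, not data from any cited system.
import cv2
import numpy as np

mask = cv2.imread("fish_mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Raw spatial moments of the silhouette, then the seven Hu invariants,
# which are unchanged under translation, scale and rotation.
moments = cv2.moments(binary, binaryImage=True)
hu = cv2.HuMoments(moments).flatten()

# Log-scale the invariants, since their magnitudes span many orders.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print("Hu moment invariants:", hu_log)
```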
An automatic way of selecting the desirable features for object recognition and classification is also needed. Chan et al. (1999) developed a 3D point distribution model (PDM) to extract the lateral length measurement automatically from images, without extensive user interaction to locate individual landmark points on the fish, using an n-tuple classifier to initialise the model; the WISARD architecture was used as a look-up table (LUT) holding information about the pattern the classifier tries to recognize, in order to assess the performance and usefulness of the n-tuple classifier for fish recognition. However, this method needs a fixed pre-defined threshold value, a large amount of prior knowledge about the fish, and a bigger training set. Determining landmarks such as the tips of the snout or fins is very important for recognizing fish. Cardin and Friedland (1999) described morphometric analysis through biometric interpretation of homologous fish landmarks, such as the tips of the snout or fins, for fish stock discrimination. However, they did not give algorithms for determining the landmarks, and the external points are unsatisfactory because their locations are subjective. Cardin (2000) reviewed shape landmarks obtained by morphometric methods for fish stock identification, and Winans (1987) used fin points, extremity points and arbitrarily placed landmarks to identify fish; the attachment points of fin membranes were found to be more effective for finfish group discrimination than landmarks located on extremities. Bookstein (1990) likewise found homologous landmarks more effective in describing shape than arbitrarily located landmarks. However, these methods must take into account fish sample size, life history, stage of development and the discriminating power of the features. The Fourier descriptor is a very well-known algorithm for describing geometric features. Cadieux et al. (2000) used Fourier descriptors of silhouette contours together with the seven moment invariants of Hu (1962) to count fish by species at fishways mounted next to rivers, achieving 78% accuracy with a majority vote of three classification methods. However, this approach needs sensors that generate silhouette contours as the fish swim between them, and hardware based on a commercial biomass counter. Manual measurement of landmark points is more accurate for identifying an object. Tillett et al. (2000) proposed a modified point distribution model (PDM) to segment fish images, taking the edge and its proximity into account in order to attract landmarks; an average accuracy of 95% was obtained when estimated fish length was compared with manual measurement. However, this method required manual placement of the PDM in an initial position close to the centre of the fish, which affects the accuracy of the final fitting; neighbouring fish also forced the PDM away from the correct edges, and fish whose orientation differed greatly from the initial PDM, or which were smaller than the initial values, could not be fitted correctly. Combining more than one classifier is important for classifying objects more accurately. Cedieux et al. (2000) proposed an intelligent system that combines the results of three classifiers to recognize fish: a Bayes classifier, a learning vector quantization classifier and a One-class-One-Network neural network classifier are applied to the output of an analysis algorithm for an infrared silhouette sensor, and a majority vote requires at least two of the three classifiers to give the same result. However, this method needs a further feature-selection approach to improve recognition performance and to optimize the selection of relevant characteristics for fish classification, and it needs more computation to identify and classify the object.
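A minimal sketch of the silhouette-contour Fourier descriptors mentioned above (Cadieux et al., 2000): the boundary points are treated as complex numbers, transformed with the FFT, and normalised into a compact shape signature. OpenCV and NumPy are assumed, the input file is hypothetical, and the normalisation choices are illustrative rather than those of the cited work.

```python
# Minimal sketch: Fourier descriptors of a closed silhouette contour.
# Contour extraction and normalisation choices here are assumptions.
import cv2
import numpy as np

mask = cv2.imread("fish_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea).squeeze()   # (N, 2) boundary points

# Treat the boundary points as complex numbers and take the DFT.
z = contour[:, 0] + 1j * contour[:, 1]
coeffs = np.fft.fft(z)

# Normalise: drop the DC term (translation), divide by |c1| (scale),
# keep only magnitudes (rotation / start point), and retain the first
# few coefficients as a compact shape signature.
descriptors = np.abs(coeffs[1:11]) / np.abs(coeffs[1])
print("Fourier descriptors:", descriptors)
```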
Detection, representation of the object's features and then classification are the main steps of any recognition and classification system. Tidd and Wilder (2001) described a machine vision system to detect and classify fish in an estuary, using a video sync signal to drive strobe lighting through a fiber bundle into a 30 cm × 30 cm × 30 cm field of view in a water tank. A window-based segmentation algorithm with an aspect-ratio and length test is used to segment fish images and eliminate partial fish segments, and a Bayes classifier then classifies three fish species from the extracted image area and aspect ratio. However, the method was tested on only 10 images of each species and needs considerable computation, although the authors concluded that the system has the potential to operate in situ. Monitoring objects underwater is a difficult problem. Rife and Rock (2001) proposed a Remotely Operated Vehicle (ROV) to follow marine animals underwater, but this approach requires continuous hours of tracking the animal's movements. Locating the critical points of an object is very important for determining its length, weight and area. Martinez et al. (2003) used an underwater stereo vision system to calculate the weight of fish from their length, using prior knowledge of the species to find points on the fish image and link them to real-world coordinates; template matching with several templates is used to find the caudal fin points and the tip of the head, and accuracies of 95% and 96% are reported for estimated fish weight. However, this method needs prior knowledge of the species and critical points to calculate the length, and it is used only to find the weight. The shape of an object is a very important feature for recognizing and identifying it. Lee et al. (2003) developed an automated Fish Recognition and Monitoring (FIRM) system whose shape analysis algorithm locates critical landmark points using curvature function analysis; the contour is then extracted from these landmark points, and from this information species classification, species composition, densities, fish condition, size and timing of migrations can be estimated. However, the method relies on high-resolution images and on determining the locations of the critical points of the fish shape. In a conventional n-tuple classifier, the n-tuples are formed by selecting multiple sets of n distinct locations from the pattern space. Tillett and Lines (2004) proposed an n-tuple binary pattern classifier that uses the difference between two successive frames to locate the initial fish image and detect the fish head, and dead fish hanging in a tank were used to estimate the mean mass. However, the estimation accuracy was low for live fish because of poorer imaging conditions and higher fish population density.
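Several of the systems reviewed above (e.g., Castignolles et al., 1994; Tidd and Wilder, 2001) feed a handful of geometric measurements into a Bayes classifier. The sketch below illustrates that idea with a Gaussian naive Bayes model from scikit-learn; the feature values and species labels are invented for illustration and are not data from any cited study.

```python
# Minimal sketch: Bayes classification of fish species from geometric
# features (segment area, aspect ratio). Training values and species
# names are invented for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [segment area in pixels, aspect ratio (length / height)]
X_train = np.array([
    [5200, 3.1], [5400, 3.3], [5100, 3.0],   # species A
    [2600, 2.2], [2500, 2.1], [2700, 2.3],   # species B
    [8100, 4.0], [7900, 4.2], [8300, 4.1],   # species C
])
y_train = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]

clf = GaussianNB().fit(X_train, y_train)

# Classify a newly segmented fish region.
new_fish = np.array([[2550, 2.15]])
print(clf.predict(new_fish))          # -> ['B']
print(clf.predict_proba(new_fish))    # posterior over the three species
```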
Different features can also be used together to classify an object. Chambah et al. (2004) proposed Automatic Color Equalization (ACE) to recognize fish species, with segmentation performed by background subtraction; geometric, color, texture and motion features are extracted, and a Bayes classifier assigns each selected fish to one of the learned species. However, this method depends on color features, which require lightness constancy and color constancy to extract visual information from the environment effectively. Semi-local invariant recognition is based on the idea that a direct search for visual correspondence is the key to successful recognition. Lin et al. (2005) proposed a neighbor pattern classifier using semi-local invariants to recognize fish; compared with integral invariants they found less mismatching, and compared with summation invariants they found that wavelet-based invariants have stronger immunity to noise. However, this method needs certain critical points of the fish shape. The Bayesian filter was originally developed for statistical recognition techniques and is known to be a very effective approach. Erikson et al. (2005) proposed fish tracking using a Bayesian filtering technique that models each fish as an ellipse with eight parameters. However, this method only counts fish without identifying their type, and the number of parameters may vary from fish to fish. Lee et al. (2008) described several shape descriptors, such as Fourier descriptors, polygon approximation and line segments, to categorize fish using a contour representation extracted from critical landmark points. The main difficulty of this method is that the landmark points sometimes cannot be located very accurately, and it needs high-quality images for analysis. The approaches reviewed above are summarized in Table 1.1.

Table 1.1: Critical Analysis of Relevant Approaches

| Author | Algorithm | Remarks |
| Castignolles et al. (1994) | Off-line method | Needs controlled background lighting and a fixed threshold; impractical in real time where fish line up close to each other. |
| Zion et al. (1999) | Moment invariants | Tail features are strongly affected by water opaqueness and fish motion; needs a clear environment in which all features appear clearly. |
| Chan et al. (1999) | PDM | Needs a fixed pre-defined threshold, prior knowledge about the fish and a bigger training set. |
| Cardin and Friedland (1999) | Morphometric analysis | No algorithms given for determining landmarks; external points are unsatisfactory because their locations are subjective. |
| Cardin (2000) | Developed morphometric analysis | Must consider fish sample size, life history, stage of development and the discriminating power of the features. |
| Cadieux et al. (2000) | Fourier descriptors | Needs sensors that generate silhouette contours as the fish swim between them, and hardware based on a commercial biomass counter. |
| Tillett et al. (2000) | Modified PDM | Requires manual placement of the PDM close to the centre of the fish, affecting the accuracy of the final fitting; neighbouring fish force the PDM away from the correct edges, and fish with very different orientation or smaller size cannot be fitted correctly. |
| Cedieux et al. (2000) | Intelligent system | Needs a further feature-selection approach to improve recognition performance and optimize the selection of relevant characteristics; needs more computation to identify and classify the object. |
| Tidd and Wilder (2001) | Machine vision system | Tested on only 10 images of each species and needs considerable computation, although the authors concluded it has the potential to operate in situ. |
| Rife and Rock (2001) | ROV | Requires continuous hours of tracking the animal's movements. |
| Martinez et al. (2003) | Template matching | Needs prior knowledge of the species and critical points to calculate the length; used only to find the weight. |
| Lee et al. (2003) | FIRM | Relies on high-resolution images and on locating the critical points of the fish shape. |
| Tillett and Lines (2004) | n-tuple classifier | Estimation accuracy was low for live fish because of poorer imaging conditions and higher fish population density. |
| Chambah et al. (2004) | ACE | Depends on color features, which require lightness and color constancy to extract visual information effectively. |
| Lin et al. (2005) | Neighbor pattern classifier | Needs certain critical points of the fish shape. |
| Erikson et al. (2005) | Bayesian filtering | Only counts fish without identifying their type; the number of parameters may vary. |
| Lee et al. (2008) | Several shape descriptors | Landmark points sometimes cannot be located very accurately; needs high-quality images for analysis. |

1.3 Image Segmentation Techniques
There are several families of techniques for solving image segmentation problems. Jeon et al. (2006) categorized them into thresholding approaches, contour approaches, region approaches, clustering approaches, and other optimization approaches using a Bayesian framework or neural networks. Clustering techniques can in turn be divided into two general groups, partitional and hierarchical clustering algorithms. Partitional clustering algorithms such as K-means and EM clustering are widely used in applications such as data mining, compression, image segmentation and machine learning (Coleman and Andrews 1979; Carpineto and Romano 1996; Jain et al., 1999; Zhang 2002a; Omran et al., 2006). This research therefore focuses on the literature on segmenting fish in underwater images using the K-means algorithm and background subtraction approaches.

1.3.1 K-Means Algorithm for Image Segmentation
The standard K-means clustering algorithm is used to cluster a given dataset into K groups. It consists of four steps: initialization, classification, centroid computation and a convergence condition. Several methods attempt to improve the standard algorithm in aspects associated with each of these steps; in computational terms, the steps that most need improvement are initialization and the convergence condition (Amir 2007; Joaquín et al., 2007). The following sections therefore review the initialization step.

1.3.1.1 The Initialization Step of K-Means Algorithm
The earliest approach to initializing the K-means algorithm is due to Forgy in 1965, who chose points at random from the data and used them as the seeds.
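The following is a minimal NumPy sketch of the four steps listed above, using Forgy-style random seeding; it shows only the standard algorithm, not any of the improved initialization variants reviewed below.

```python
# Minimal sketch of the standard K-means loop: Forgy-style random
# initialisation, assignment, centroid update, convergence test.
import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-6, seed=None):
    rng = np.random.default_rng(seed)
    # Initialization (Forgy): k distinct points chosen at random as seeds.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Classification: assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Centroid computation: mean of the points in each cluster.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Convergence condition: stop when the centroids barely move.
        if np.linalg.norm(new_centroids - centroids) < tol:
            break
        centroids = new_centroids
    return labels, centroids

# Toy usage: cluster pixel colours of an image flattened to (num_pixels, 3).
pixels = np.random.default_rng(0).random((1000, 3))
labels, centroids = kmeans(pixels, k=4, seed=0)
```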
MacQueen (1967) then introduced an online learning strategy for determining the set of cluster seeds (MacQueen 1967; Stephen 2007). However, this method can choose a point near an existing cluster centre or an outlying point, and repeating the runs increases the time taken to obtain a solution. An approach that divides the dataset into classes without prior knowledge of the classes is also required. Tou and Gonzales (1974) suggested the Simple Cluster Seeking (SCS) method: calculate the distance between the first instance in the database and the next point; if it is greater than some threshold, select that point as the second seed, otherwise move to the next instance, and repeat until K seeds are chosen. However, this method depends on the threshold value and on the order in which the pattern vectors are processed, and repeating the runs increases the time taken to reach the chosen seeds. For an optimal partition of the dataset, which can achieve better variance equalization than the standard approach, Linde et al. (1980) proposed a Binary Splitting (BS) method: first run with K = 1, then split into two clusters until convergence is reached, and repeat the cycle of splitting and converging until a fixed number of clusters is reached or each cluster contains only one point. However, splitting increases the computational complexity and the algorithm must be run again. Good initial seeds are important if a clustering algorithm is to converge rapidly to the globally optimal structure. Kaufman and Rousseeuw (1990) suggested selecting the most centrally located instance as the first seed, then selecting each subsequent seed according to the greatest reduction in distortion, continuing until K seeds are chosen. However, this method needs considerable computation for choosing each seed. Artificial intelligence has also been used to select near-optimal seeds. Babu and Murty (1993) and Jain et al. (1996) proposed genetic algorithms that treat different seed selections as the population; the fitness of each seed selection is assessed by running the K-means algorithm to convergence and calculating the distortion value. However, K-means must be run for each solution of each generation, and the result of a genetic algorithm depends on the choice of population size and the crossover and mutation probabilities. An enhanced approach is needed to improve clustering quality and overcome the computational complexity. Huang and Harris (1993) described the Direct Search Binary Splitting (DSBS) method, based on Principal Component Analysis (PCA), to enhance the splitting step of the Binary Splitting algorithm; however, it also requires considerable computation to reach the K chosen seeds. Another family of methods selects seeds by calculating distances between all points of the dataset. Katsavounidis et al. (1994) proposed the KKZ algorithm, which takes a point preferably on the edge of the data as the first seed, chooses the second seed as the point furthest from the first, and then repeatedly chooses the point furthest from its nearest seed until K seeds are chosen. However, an obvious pitfall of this method is that any noise in the data is liable to be chosen as a seed.
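The sketch below illustrates KKZ-style seeding as just described: an extreme point is taken as the first seed, and each subsequent seed is the point farthest from its nearest already-chosen seed. It is an illustrative reading of the method, not the authors' code.

```python
# Minimal sketch of KKZ-style seeding (Katsavounidis et al., 1994).
import numpy as np

def kkz_seeds(X, k):
    # First seed: the point with the largest norm (an "edge" of the data).
    seeds = [X[np.linalg.norm(X, axis=1).argmax()]]
    # Distance from every point to its nearest chosen seed so far.
    min_dist = np.linalg.norm(X - seeds[0], axis=1)
    for _ in range(1, k):
        nxt = X[min_dist.argmax()]            # farthest point becomes the next seed
        seeds.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(X - nxt, axis=1))
    return np.array(seeds)

# These seeds can replace the random initialisation in the K-means sketch
# above; note that an outlier caused by noise will be picked eagerly,
# which is exactly the weakness noted in the text.
X = np.random.default_rng(1).random((500, 2))
print(kkz_seeds(X, k=4))
```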
To increase the speed of the algorithm, the input domain can be divided into subspaces. Daoud and Roberts (1996) proposed dividing the whole input domain into two disjoint volumes; within each subspace the points are assumed to be randomly distributed, and the seeds are placed on a regular grid. However, this method ultimately falls back on random choice. The mean of a dataset is an important value on which to base seed estimation. Thiesson et al. (1997) suggested calculating the mean of the entire dataset and perturbing it randomly over repeated runs of the algorithm to produce the K seeds; however, this method relies on random repetition of the steps until the desired clusters are reached. To find a better clustering initialization for the K-means algorithm, Forgy's method has also been reused. Bradley and Fayyad (1998) presented a technique that begins by randomly breaking the data into ten or so subsets, then performs K-means clustering on each subset, all starting from the same set of initial seeds chosen with Forgy's method. However, this technique requires choosing the subset size and uses the same initial seeds for each subset. One way of reducing the time complexity of the initialization calculation is to use structures such as k-d trees. Likas et al. (2003) described a global K-means algorithm that gradually increases the number of seeds until K is reached, using a k-d tree to create K buckets and taking the centroid of each bucket as a seed. However, the results must be tested to find the best number of clusters. The performance of iterative clustering algorithms depends strongly on the initial cluster centres. Mitra et al. (2002) and Khan and Ahmad (2004) proposed a Cluster Centre Initialization Algorithm (CCIA) based on Density-based Multiscale Data Condensation (DBMSDC): the density of the dataset is estimated at each point, the points are sorted by density, and each attribute is examined individually to extract a list of possible seed locations, repeating the process until the desired number of points remains. However, this method depends on another approach to reach the desired seeds, which adds computational complexity. Similarly, Stephen and Conor (2007) presented a technique for initializing the K-means algorithm that incorporates k-d trees to obtain density estimates of the dataset, and then uses the distance and density information sequentially to select the K seeds. However, this method occasionally fails to provide the lowest value of distortion. These initialization methods are summarized in Table 1.2.

Table 1.2: Critical Analysis of Relevant Approaches

| Author | Algorithm | Remarks |
| Forgy (1965) and MacQueen (1967) | Random initial K-means | May choose a point near a cluster centre or an outlying point; repeated runs increase the time taken to obtain a solution. |
| Tou and Gonzales (1974) | SCS | Depends on the threshold value and the order in which pattern vectors are processed; repeated runs increase the time taken to reach the chosen seeds. |
| Linde et al. (1980) | BS | Splitting increases the computational complexity and the algorithm must be run again. |
| Kaufman and Rousseeuw (1990) | Selecting the first seed | Needs considerable computation for choosing each seed. |
| Babu and Murty (1993) | GA | K-means must be run for each solution of each generation; the result depends on population size and the crossover and mutation probabilities. |
| Huang and Harris (1993) | DSBS | Also requires considerable computation to reach the K chosen seeds. |
| Katsavounidis et al. (1994) | KKZ | Any noise in the data is liable to be chosen as a seed. |
| Daoud and Roberts (1996) | Two disjoint volumes | Ultimately falls back on random choice. |
| Thiesson et al. (1997) | Mean of the dataset | Relies on random repetition of the steps until the desired clusters are reached. |
| Bradley and Fayyad (1998) | Random subset technique | Requires choosing the subset size and uses the same initial seeds for each subset. |
| Likas et al. (2003) | Global K-means | Results must be tested to find the best number of clusters. |
| Khan and Ahmad (2004) | CCIA | Depends on another approach to reach the desired seeds, which adds computational complexity. |
| Stephen and Conor (2007) | k-d trees | Occasionally fails to provide the lowest value of distortion. |

1.3.2 Background Subtraction for Image Segmentation
Background subtraction is the basic approach for automatic object detection and segmentation, and a commonly used class of techniques for segmenting the objects of a scene in many applications. Wren et al. (1997) proposed the running Gaussian average, which models the background independently at each pixel location by fitting a Gaussian probability density function to the last n pixel values; the standard deviation is computed incrementally to increase speed. The advantage of the running average is its low memory requirement, since no buffer of the last n pixel values is needed; however, the empirical weight must be chosen as a trade-off between stability and quick updating. Object detection is often approached through background subtraction based on a multi-valued background. Stauffer and Grimson (1999) proposed a multi-valued background model that describes both foreground and background values, modelling the probability of observing a certain pixel value at a given time as a mixture of Gaussians. However, this method requires assigning each newly observed pixel value to the best-matching distribution and estimating the updated model parameters. Density estimators can be a valuable component of applications such as object tracking. Elgammal et al. (2000) proposed a non-parametric model based on Kernel Density Estimation (KDE), using the last n background values to model the background distribution as a sum of Gaussian kernels centred on the most recent n background samples. However, complete model estimation also requires estimating the sum of the Gaussian kernels. Eigen-decomposition methods are computationally demanding because they involve computing the eigenvectors and corresponding eigenvalues. Oliver et al. (2000) proposed an eigenbackgrounds approach based on eigenvalue decomposition applied to the whole image rather than to blocks of the image. This method can be made more efficient, but it depends on the images used for the training set, and it does not explicitly specify which images should be part of the initial sample, or whether and how the model should be updated over time.
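As an illustration of the simplest of these models, the sketch below implements a per-pixel running Gaussian average in the spirit of Wren et al. (1997), described at the start of this subsection; the learning rate and the 2.5-sigma threshold are illustrative assumptions, not values from the cited work.

```python
# Minimal sketch of a per-pixel running Gaussian background model.
import numpy as np

class RunningGaussianBackground:
    def __init__(self, first_frame, alpha=0.02, init_var=30.0**2):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, init_var)
        self.alpha = alpha  # empirical weight: stability vs. quick update

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # A pixel is foreground if it deviates by more than 2.5 sigma.
        foreground = np.abs(diff) > 2.5 * np.sqrt(self.var)
        # Update mean and variance only where the pixel looks like background,
        # so moving fish are not absorbed into the model too quickly.
        update = ~foreground
        self.mean[update] += self.alpha * diff[update]
        self.var[update] += self.alpha * (diff[update] ** 2 - self.var[update])
        return foreground

# Usage with a hypothetical grayscale frame sequence `frames` (list of 2-D arrays):
# bg = RunningGaussianBackground(frames[0])
# masks = [bg.apply(f) for f in frames[1:]]
```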
The temporal median filter is used to generate and select a set of temporal pixel samples derived from the incoming images. Lo and Velastin (2001) proposed a temporal median filter that takes the median value of the last n frames as the background model, and Cucchiara et al. (2003) extended it by computing the median over a special set of values comprising the last n frames, sub-sampled frames and the time of the last computed median. The disadvantage of the temporal median filter is that the computation requires a buffer of recent pixel values, and the median does not provide a deviation measure for adapting the subtraction threshold. The information in the difference frames can also be accumulated to construct a reliable background image. Seki et al. (2003) proposed background subtraction based on the co-occurrence of image variations, working on blocks of N × N pixels treated as N²-component vectors instead of working at pixel resolution. This method offers good accuracy at reasonable time and memory cost, but a certain update rate is needed to cope with more extended illumination changes. Background modeling of a moving object requires sequential density estimatio
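A minimal sketch of the temporal median background model described above (Lo and Velastin, 2001); the buffer length and threshold are illustrative assumptions.

```python
# Minimal sketch of temporal median background subtraction: the background
# is the per-pixel median of the last n frames.
import numpy as np
from collections import deque

class TemporalMedianBackground:
    def __init__(self, n=25, threshold=25):
        self.buffer = deque(maxlen=n)   # the last n frames (the memory cost noted above)
        self.threshold = threshold

    def apply(self, frame):
        self.buffer.append(frame.astype(np.float64))
        background = np.median(np.stack(self.buffer), axis=0)
        # Foreground where the current frame differs strongly from the median.
        return np.abs(frame - background) > self.threshold

# Usage with a hypothetical grayscale frame sequence `frames`:
# bg = TemporalMedianBackground(n=25)
# masks = [bg.apply(f) for f in frames]
```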

Friday, October 25, 2019

John F. Kennedy Essay -- President Presidency Governmental Essays

John F. Kennedy was one of the greatest presidents of the twentieth century. He united almost the entire nation under a common goal: the Moon. His charisma could turn skeptics into believers and strengthen the bond between himself and his supporters. He had so much charisma because he used many rhetorical devices in his speeches, the same rhetorical devices that have been wooing crowds of people since the time of Rome. One of his most memorable speeches was given at Rice University in 1962. In order to rally support for the space program among average United States citizens, Kennedy employs rhetorical devices, rhetorical appeals, and argument structure. Kennedy uses many rhetorical devices in his speech. A poignant example of this is when he employs both denotative and connotative language to add emphasis. An example of his denotative language can be seen in this sentence: "... F-1 rocket engines each as powerful as all eight engines of the Saturn combined ..." (Kennedy, 1962, p. 2). He knows his audience is made up mostly of engineers who would understand what the Saturn and F-1 boosters are, so he does not waste their time explaining the technical aspects of the engines. The audience would probably enjoy this, because it shows that Kennedy thinks highly of their intellect. Kennedy uses connotative language in his statement: "We have had our failures, but so have others, even if they do not admit them. And they may be less public." (Kennedy, 1962, p. 3). In this sentence, Kennedy connotes that the Russians are also having problems with their manned space program, even though they are reluctant to expose their failures to the public. Kennedy also uses connotative speech when he says, "Well space is... ... contrast in order to show the different intentions of the Soviets and the US. He feels the Soviets want to dominate mankind under the banner of Communism, but he wants to beat them to the Moon so that Democracy wins the race for dominance. He also uses chronological arguments in the beginning of his speech in order to demonstrate the evolution of technology in the US. This demonstrates how fast we are creating new technologies, and how that will affect our race against the Soviets. Kennedy was among the great speakers throughout history. He was no Abraham Lincoln delivering the Gettysburg Address, nor was he Mark Antony giving the eulogy of Julius Caesar, but he did use the same tools of rhetoric developed and masterfully employed by these great men. References: Retrieved from the World Wide Web on 2/24/03, from http://www.rice.edu/fondren/woodson/speech.html

Thursday, October 24, 2019

Great Gatsby Point of View Analysis Essay

A narrator, by definition, is how an author chooses to portray information to readers in their work. An author's choice of how to tell a story is central to the effect it has on readers. In F. Scott Fitzgerald's timeless classic The Great Gatsby, Nick Carraway tells the entire story as a first-person, peripheral narrator. Fitzgerald purposefully chooses Nick as a partially removed character, with very few emotions and personal opinions. By doing so, readers experience the same ambiguity about other characters' thoughts, are carried smoothly through the plot, and Nick's nonjudgmental character lets readers form opinions of their own. To begin with, because Nick is merely another character in the unfolding tragedy, readers can never see into other characters' minds. Other characters' thoughts and opinions are completely unknown, and readers are forced to use their imaginations to figure out what characters are thinking. For example, readers are left just as clueless and curious as Nick himself when Gatsby declares: "I'm going to make a big request of you to-day, so I thought you ought to know something about me. I didn't want you to think I was just some nobody. You see, I usually find myself among strangers because I drift here and there trying to forget the sad thing that happened to me. You'll hear about it this afternoon." (67) This is an effective example of the narrator giving the story depth and suspense, because readers are left intrigued by this statement and no hints, given by the thoughts of characters, are revealed. Carraway's ignorance of other characters' thoughts is effective in the portrayal of Gatsby's tale, because half of the intrigue of the story of Gatsby's downfall is his mysterious manner. If readers were able to understand Daisy's or Gatsby's personal thoughts, there would be no suspense in the outcome of the novel. Nick happens to be rather clueless about Daisy, Tom, and Gatsby's true feelings, which is why he makes such an excellent narrator. The fact that Nick is a legitimate character in the story, who is present at all the key events in the novel, helps carry the plot along smoothly and in a timely manner. It also allows readers to better understand how one would feel if placed in these situations. Nick provides an intimate relationship between readers and the setting, because although he rarely provides personal opinions, it is understood that he feels awkward in the majority of the dramatic scenes he is involved in. To continue, all of the action in the book occurs in a few key scenes, all of which Nick witnesses; this helps Fitzgerald portray action in a straightforward way. There is no need to go in depth about emotions; he simply uses dialogue between characters and details about the setting to help readers understand what is going on, and lets them infer how certain characters are feeling. The best example of Nick's aloof description of a key event is at the Manhattan apartment, when Tom hits Myrtle: "Making a short deft movement, Tom Buchanan broke her nose with his open hand. Then there were bloody towels upon the bathroom floor, and women's voices scolding, and high over the confusion a long broken wail of pain. Mr. McKee awoke from his doze and started in a daze toward the door.
When he had gone halfway he turned around and stared at the scene—his wife and Catherine scolding and consoling as they stumbled here and there among the crowded furniture with articles of aid, and the despairing figure on the couch, bleeding fluently, and trying to spread a copy of Town Tattle over the tapestry scenes of Versailles. Then Mr. McKee turned and continued on out the door. Taking my hat from the chandelier, I followed." (37) Clearly, by using Nick as an involved, yet aloof and purely logical narrator, the author is able to concisely tell the story without confusing or overwhelming readers, and is able to give as much information as necessary while leaving readers space for imagination. Besides his ignorance of other characters' thoughts, Nick, as a practical, peripheral narrator, provides little to no personal opinion. Although it could be argued that this is a negative quality for a narrator, Fitzgerald made sure he gives nothing away and forces no opinions on the readers. He leaves all final opinions in the hands of readers, which makes the novel such an interesting subject because of the variety of interpretations available. Nick never judges any of the characters for their immoral actions and poses as an innocent, reserved bystander. This leaves final judgment open to opinion, which is why The Great Gatsby can appeal to so many different audiences. At the end of the novel, Fitzgerald includes the statement "one gentleman to whom I telephoned implied that he had got what he deserved" (169) in reference to Gatsby's death, which leaves readers to choose a side: whether to pity Gatsby, or to believe that his unlawfulness led to his own demise. Overall, Fitzgerald obviously put a great amount of thought into choosing Nick Carraway, an innocent, exclusive, yet ever-present character, as the narrator of the story. Because of Nick's circumstance and character, the novel is most effective in entertaining readers: readers are left curious about the characters' feelings, are shown the plot in a smooth manner, and are capable of forming individual opinions. In the end, point of view is extremely important to the appeal of a novel, and F. Scott Fitzgerald shows his talent by choosing Nick Carraway to tell the traumatic tale of The Great Gatsby.

Wednesday, October 23, 2019

Judaism, Islam, Christianity Essay

Judaism, Islam, and Christianity are all completely different religions from an outsider’s point of view. Yet, when you look at all three of them in depth, a person can find many of the same characteristics. From their origins to their life rituals, there are many differences and similarities between these three popular religions. Between the origins of Judaism, Islam, and Christianity, there is much overlap. Judaism was started through the Patriarch and Matriarch of the faith, Abraham and Sarah. They bore a child together named Isaac, whom Jewish people believe to be their ancestor. Jewish people call themselves Children of Israel, signifying their descent from Jacob. Abraham also had another son with a different woman; this son, Ishmael, is believed to be the ancestor of Islam. The origin of Christianity was Jesus Christ, who Christians believe rose from the dead and is the Son of God. His followers, otherwise known as disciples, spread the religion after his death in 30 CE throughout the Roman Empire. It soon became the official religion of the empire by Emperor Constantine’s decision. It has since spread worldwide and is the largest religion in the world, with almost 2.2 billion followers. The sacred writings of Judaism, Christianity, and Islam have many similarities. Christianity and Judaism believe in the Old Testament, which in Judaic terms is the Tanakh. This consists of the Torah, the Nevi'im, and the Ketuvim. It tells of God making a covenant with his people. Jewish people believe that Jesus is not the Son of God and that their saviour is still to come. Muslims follow the exact writings of the Qur’an, which they believe their prophet Mohammed was told in a revelation from Allah. They also follow the Hadith and the Sunna, which are, in a way, different accounts of Mohammed’s life and sayings. They regard parts of the Old Testament and the Gospels as inspired, and believe the Qur’an to be a more final and complete revelation. The places of worship of Judaism, Islam, and Christianity are quite different. People of Jewish faith observe the Sabbath and conduct their services in a synagogue or the Temple, Christians worship in churches, chapels, and cathedrals, and Muslims worship in mosques. People of Jewish faith and Muslims do not allow statues in their worship places, stating that they take attention away from God and Allah and compromise their monotheistic belief. Roman Catholics do not worship statues or icons. In the Eastern Catholic churches, people viewed icons as a way to greater worship and prayed to them for protection. In Judaism and Christianity, the Holy Land, being Israel, is considered a very sacred place because Jesus was born and lived there, and also because it was the land promised to Abraham. Rome is also considered a very sacred place to Christians because that is where the leader of their religion, the Pope, lives. This is similar to Medina and Mecca in Islam, since their house of God, the Kaaba, is located there and is believed to sit directly beneath Heaven. The roles of women in Judaism, Islam, and Christianity, although men and women are equal in the eyes of God, are similar. Traditional Judaism gives different roles to men and women; for example, Orthodox men and women worship separately. This can be compared with Islam, where the Qur’an treats men and women as equals, and with Christianity, where everyone is equal under God, which allows women and men to be treated as equals.
For example, both genders can attend worship at the same time in the same place. Unfortunately, women are oppressed in today’s Muslim society due to Sharia law, which they believe is the law of Allah. It often discriminates against women and strips them of their rights. For example, a woman’s word does not count as much as a man’s. This is similar to Christianity, where women cannot become ordained priests and are not given equality within the Church. Also, men and women worship separately in Islam, which shows similarities to Orthodox Judaism. The symbols of Judaism, Christianity, and Islam are very different. The Star of David is named after King David, who had a shield with a star on it. It has seven spaces, including the separate points and the centre. The number seven is very important within the Jewish faith because of the six days of creation plus the seventh day of rest. The menorah, another sacred Jewish symbol, also represents the seven days of creation. It is referred to as the “tree of life” because it has seven branches. The mezuzah is another sacred object; it contains the Shema written on a parchment. The most sacred ritual object in the Jewish faith is the Torah scroll. It is the centre of Jewish life because it is used to teach, and it has the Five Books of Moses inscribed in it. In comparison to Judaism, the symbols of Christianity are few. Christians regard bread as Jesus’ body, which they call the Eucharist, and they regard wine as Jesus’ blood. They eat and drink these at masses in remembrance of the Last Supper and the sacrifice that Jesus made to wash away their sins. They regard the cross as a symbol of that sacrifice as well. Ichthus, the symbol of a fish, is also a symbol of Christianity. In Islam, the Tawhid is the concept of monotheism: it holds God to be one and unique. The crescent and star is widely used as a symbol on Islamic flags. When babies are born in Judaism, Islam, and Christianity, there are many rituals to attend to. In Judaism, the baby boy is circumcised in a ceremony called a Brit milah. Muslims also believe in having their sons circumcised. In Christianity, the baby is baptised by a priest to rid it of its original sin. In Islam, the call to prayer is whispered in the baby’s right ear, making sure that it is the first sound the child hears. There is also a naming ceremony where close friends and family gather to decide on the child’s name. Each of these rituals is different, leading to diversity between the religions. During a marriage in Judaism, Islam, and Christianity, one must use different rituals to attend to the needs of the religion. In Judaism, the couple stands under a canopy while the rabbi reads from the Torah, and the marriage becomes official when the partners give something of value to each other, such as rings. In Islam, many marriages are arranged and polygamy is allowed; marriages are sometimes seen as a way to gain political advantage and to tie one family to another. This is not the case in Christianity. When you marry under God in a church, divorce is not permitted unless the circumstances are dire. You exchange rings as a sign of the vow you have given to the other person. Also, you are a couple under God and are expected to baptise your children. When it comes to death in Judaism, Islam, and Christianity, there are different ways to go about it. In Judaism, the family sits shiva, mourning for a period of seven days.
In Islam, the family member is quickly wrapped and buried, pointed towards Mecca, which holds the sacred Kaaba. Muslims also believe that the last words on a dying person’s lips should be the Shahada. In Christianity, a mass is held where family and friends can mourn as one. If possible, the dying person is blessed by a priest, which relieves them of their sins; this is called the Anointing of the Sick and Last Rites. The beliefs of Judaism, Islam, and Christianity are quite similar, though each has a different take on past events. Christians, Muslims, and Jewish people all believe in monotheism, stating that there is only one divine God. Muslims and Jewish people claim that Christians do not believe in one God, since they hold that God exists in three persons: the Father, the Son, and the Holy Spirit. Christians call this the Trinity. In Judaism, people do not believe that Jesus rose from the dead, is the Son of God, or was born of the Virgin Mary. In Christianity, all of these points are believed. In Islam, Jesus is regarded as a prophet born of the Virgin Mary, but Muslims believe He did not die on the cross and was instead raised into heaven by God. People of Jewish faith think that Jesus was crucified because of his claim to be divine. Choosing to disregard the claim that Jesus is the saviour, they believe that their saviour will come one day to unite the world and bring peace to humanity. Muslims believe that the Kaaba, a sacred cube located in Mecca, is God’s house and sits directly beneath heaven. They hold that the point of life is to live in a way that pleases Allah and so gain a place in Paradise, their heaven in the afterlife. The meaning of life for Christians, though, is to seek divine salvation through the grace of God and to become one with Him. People of Jewish faith believe life should be spent helping humanity and one’s neighbours. Christianity teaches that every human has inherited “original sin” from Adam, meaning that people have a tendency towards evil, whereas Judaism and Islam hold that people are capable of both good and evil actions. In comparison to Christianity and Judaism, prayer rituals are taken very seriously in Islam. Muslims pray five times a day, at dawn, midday, afternoon, sunset, and evening, in what is called the Salat. This is similar to Orthodox Judaism, in which formal worship services are held three times a day: morning, afternoon, and evening. Jewish worshippers pray the Shema, the most important prayer in Judaism. Before prayer, Muslims wash their feet up to the ankles and their arms up to the elbows to cleanse themselves. This is somewhat similar to Christianity, where blessed holy water is used before entering mass; it blesses oneself, recalls baptism, and forgives sins. Each Islamic prayer is directed towards Mecca, where the Kaaba is located. Women and men pray in parallel lines at separate times, and they pray on rugs to keep themselves clean. There are also certain guidelines that women and men need to follow in terms of what to wear to the mosque; for example, a woman should not wear clothes that attract attention. In the European Christian churches there are many dress codes one would need to follow, but this is not the case in most Western churches, where the formalities have lessened and one can wear jeans to mass without causing an uproar, which is much different from Islam.
Judaism, Islam, and Christianity are similar religions when it comes to beliefs. While they have diverse opinions and are practised in countries all over the world, these well-known religions are revered for their perseverance. All three are valid religions which, through different takes on past events, have been moulded into what they are today. For example, while Christianity and Islam choose to believe that Jesus will come again, Judaism chooses not to; this take on a past event has shaped Christianity and Judaism greatly. Also, Islam has a different view of women's rights and placement in society in comparison to Judaism and Christianity. I think that while Islam and Christianity are completely opposite when it comes to rituals and strictness, they are very similar in terms of beliefs. Although Judaism and Islam originated from the same family tree, and Judaism and Christianity coincide on many events, such as their origins, I believe that Judaism is the most different of the three because of its views about Jesus. Judaism, Islam, and Christianity are all completely different religions from an outsider's point of view. Yet, when you look at all three of them in depth, a person can find many of the same characteristics.