14. Sample means are always expected to depart from the “true” mean of the population. Any deviation of a sample mean from it may be regarded as an error of estimation. The standard error of the sample mean indicates the extent of this error of estimation.
16. If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter. Confidence intervals are usually calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter.
17. The width of the confidence interval gives us some idea about how uncertain we are about the unknown parameter (see precision). A very wide interval may indicate that more data should be collected before anything very definite can be said about the parameter.
18. Confidence intervals are more informative than the simple results of hypothesis tests (where we decide "reject H0" or "don't reject H0") since they provide a range of plausible values for the unknown parameter.
19. To determine the confidence interval for the population mean, the following formula is used:

CI = x̄ ± z(σ/√n)

where x̄ is the sample mean, σ is the population standard deviation (estimated by the sample standard deviation s for large samples), n is the sample size, and z is the value corresponding to the chosen confidence level (e.g. 1.96 for 95%).
25. The confidence level is often expressed as a percentage. For example, if α = 0.05, then the confidence level is equal to (1 − 0.05) = 0.95, i.e. a 95% confidence level.
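The confidence-interval formula above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original notes: the sample scores and the assumed population standard deviation of 10 are hypothetical, and z = 1.96 corresponds to the 95% confidence level.

```python
import math

def mean_confidence_interval(xs, sigma, z=1.96):
    """z-interval for the population mean: x-bar +/- z * (sigma / sqrt(n))."""
    n = len(xs)
    xbar = sum(xs) / n
    se = sigma / math.sqrt(n)          # standard error of the mean
    return xbar - z * se, xbar + z * se

# Hypothetical sample of 8 scores, assumed population SD of 10.
scores = [72, 75, 78, 80, 82, 85, 88, 90]
lo, hi = mean_confidence_interval(scores, sigma=10)
```

A wider interval (e.g. z = 2.576 for 99%) would reflect greater confidence at the cost of precision, as noted above.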
27. Testing of hypotheses, or significance testing, begins with the statement of a supposition called the NULL HYPOTHESIS, denoted by Ho, which specifies a zero value (no effect). At times the null hypothesis is also called a hypothesis of no difference. If the values to be compared are A and B, the null hypothesis would be expressed in symbols as Ho: A = B or Ho: A − B = 0.
29. In case the null hypothesis is rejected, the alternative hypothesis, denoted by H1, is accepted. It may be expressed in a non-directional form, H1: A ≠ B, or in either of two directional forms, H1: A > B or H1: A < B.
43. The standard deviation of a proportion is

SDp = √(p(1 − p)) or √(pq)

where:
SDp is the standard deviation of a proportion
p is the proportion of subjects who possess the trait
q is the proportion of subjects who do not possess the trait

The standard error of a proportion is estimated as follows:

SEp = SDp / √(N − 1)

where SEp is the standard error. For comparing two large-sample proportions,

z = (X1/n1 − X2/n2) / √( p(1 − p)(1/n1 + 1/n2) )

where:
z is the test statistic for large-sample proportions
X1/n1 and X2/n2 are the proportions of the two samples
n1, n2 are the numbers of cases of the two samples
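The two-proportion z formula can be sketched as follows. This is an illustrative example, not from the original notes: the counts (45 of 100 vs. 30 of 100 possessing the trait) are hypothetical, and p is taken as the pooled proportion of the two samples, a common choice under Ho: P1 = P2.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled proportion p under Ho: P1 = P2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # SE of the difference
    return (p1 - p2) / se

# Hypothetical counts: 45/100 vs 30/100 subjects possess the trait.
z = two_proportion_z(45, 100, 30, 100)
```

The computed z is then compared with the tabular value (e.g. 1.96 at the 0.05 level, two-tailed) to decide on Ho.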
51. Find the degrees of freedom (df) using the formula df = N1 + N2 − 2 if two different sets of subjects are exposed to different variables (independent samples), and df = N − 1 if only one set of subjects is exposed to two variables (matched pairs).
52. Choose the level of significance, either the 0.01 or the 0.05 level, and refer to the t-distribution table. If the computed value (CV) is equal to or greater than the tabular value (TV), then reject Ho.
60. Divide Sd by √n, where n is the number of matched pairs in the two samples, to obtain the standard error of the mean difference.
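The matched-pairs steps above can be sketched as a small Python function. This is an illustrative sketch, not from the original notes: the pre-test/post-test scores are hypothetical, Sd is the sample standard deviation of the pair differences, and df = n − 1 as in step 51.

```python
import math

def paired_t(before, after):
    """t statistic for matched pairs: t = mean(d) / (Sd / sqrt(n)),
    where d are the pair differences and df = n - 1."""
    n = len(before)
    d = [b - a for b, a in zip(before, after)]
    dbar = sum(d) / n
    sd = math.sqrt(sum((x - dbar) ** 2 for x in d) / (n - 1))  # Sd
    return dbar / (sd / math.sqrt(n)), n - 1

# Hypothetical pre-test / post-test scores for 6 matched pairs.
pre  = [10, 12, 9, 11, 14, 8]
post = [13, 14, 10, 15, 16, 9]
t, df = paired_t(post, pre)   # positive t means post-test scores are higher
```

The resulting t is compared against the tabular value for df degrees of freedom at the chosen level of significance.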
62. CHI-SQUARE is considered a versatile measure with various uses in both parametric and nonparametric tests. Using the chi-square, a researcher can determine the uniformity of responses to a certain questionnaire, the uniformity of reading skill levels, etc.
63. It is used to test the association or independence of attributes, e.g. between scholastic achievement and various levels of socio-economic background, or whether the morale of teachers is associated with or independent of salary, sex, the leadership behavior of administrators, etc.
64. It is also applied to test for normality of the distribution of data collected, such as achievement scores in various subjects, or the normality of the distribution of teachers’ reactions to the present evaluation instrument of DepEd.
65. Test for Uniformity of Distribution – used to test the agreement between an observed distribution and a hypothesized uniform distribution.
67. Determine the number of individuals that fall into each category being observed.
68. Determine the number of individuals that are expected to fall into each category by dividing the total number of individuals by the number of categories
69. For each cell, subtract the Expected from the Observed frequency, square the difference, and divide the result by the Expected frequency; then sum the results over all cells to obtain x2.
72. Using the x2 table with degrees of freedom (k − 1), where k is the number of categories – here, degrees of freedom equal to 2 – the x2 tabular values are 5.991 at the 0.05 and 9.210 at the 0.01 level of significance, respectively.
73. Since the computed x2 value is 4.90, which is less than 5.991, the decision is to accept Ho at both levels of significance and conclude that there is an equal distribution of choices (x2 = 4.90 < 5.991, hence accept Ho).
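The uniformity-test steps (items 67–73) can be sketched in Python. The observed frequencies in the original worked example are not shown here, so the counts below are hypothetical; the comparison against the tabular value 5.991 (df = 2, 0.05 level) follows the notes.

```python
def chi_square_uniform(observed):
    """Chi-square test for uniformity: E = total / k for each category,
    x2 = sum((O - E)^2 / E), with df = k - 1."""
    k = len(observed)
    expected = sum(observed) / k            # equal expected count per category
    x2 = sum((o - expected) ** 2 / expected for o in observed)
    return x2, k - 1

# Hypothetical choices among 3 categories (original counts not shown).
x2, df = chi_square_uniform([38, 27, 25])
# Decision rule: reject Ho if x2 >= tabular value 5.991 (df = 2, 0.05 level).
reject = x2 >= 5.991
```

With these hypothetical counts the computed x2 falls below 5.991, so Ho of equal distribution would be accepted, mirroring the decision in item 73.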
75. There are 5 subgroups. The base of each subgroup will be 6/5 = 1.2 standard deviations in length under the normal curve (which is taken to span −3 SD to +3 SD).
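For the normality test, the expected proportion of cases in each of the 5 subgroups can be read off the normal curve. The sketch below (illustrative, not from the original notes) computes those proportions from the standard normal CDF for the five 1.2-SD-wide intervals between −3 SD and +3 SD; multiplying each proportion by N gives the expected frequencies for the chi-square normality test.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Five subgroups of width 6/5 = 1.2 SD spanning -3 SD to +3 SD.
cuts = [-3 + 1.2 * i for i in range(6)]    # [-3, -1.8, -0.6, 0.6, 1.8, 3]
props = [phi(b) - phi(a) for a, b in zip(cuts, cuts[1:])]
```

The proportions are symmetric about the mean, with the middle interval (−0.6 SD to +0.6 SD) containing about 45% of the area.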
89. Refer to the table of chi-square and compare the value just computed. Any computed value equal to or greater than the tabular value rejects the hypothesis of no association.
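The test of association mentioned in items 63 and 89 can be sketched for a contingency table. This is an illustrative example, not from the original notes: the 2 × 2 table (teacher morale high/low vs. salary above/below the median) is hypothetical.

```python
def chi_square_independence(table):
    """Chi-square test of association for an r x c contingency table:
    E[i][j] = row_total * col_total / grand_total, df = (r-1)(c-1)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    x2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
             / (rows[i] * cols[j] / total)
             for i in range(len(rows)) for j in range(len(cols)))
    return x2, (len(rows) - 1) * (len(cols) - 1)

# Hypothetical 2 x 2 table: morale (high/low) vs salary (above/below median).
x2, df = chi_square_independence([[30, 10], [20, 40]])
```

The computed x2 is then compared against the tabular value for df = 1 (3.841 at the 0.05 level); a computed value at or above it rejects the hypothesis of no association.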
107. The sample problem presented is known as the RANDOM EFFECTS MODEL, also called the components-of-variance model. It assumes that the treatments are random samples of all possible treatments. It does not look for differences among the group means of the treatments being tested, but rather asks whether there is significant variability among all the possible treatment groups.
108. F-test or Single-Factor Analysis of Variance (ANOVA) – it involves one independent variable as the basis for classification. This is usually applied in single-group designs and the completely randomized design (CRD), where the F-test (single-factor ANOVA) is used to test the significance of the difference between means.
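The single-factor F-test can be sketched as the ratio of between-group to within-group mean squares. This is a minimal illustration, not from the original notes; the scores for the three treatment groups are hypothetical.

```python
def one_way_anova(groups):
    """F statistic for single-factor ANOVA:
    F = (between-group mean square) / (within-group mean square)."""
    N = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / N
    # sum of squares between groups (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # sum of squares within groups
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, N - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical scores for three treatment groups.
f, df_b, df_w = one_way_anova([[4, 5, 6], [6, 7, 8], [9, 10, 11]])
```

The computed F is compared against the tabular F value for (df_b, df_w) degrees of freedom at the chosen level of significance.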
109. F-test Two-Factor or Two-Factor ANOVA – it involves three or more independent variables as the basis for classification. The two-factor F-test is appropriate for the parallel-group design. In this design, three or more groups are used at the same time, with one variable manipulated or changed; the experimental groups vary while the parallel group serves as the control group for comparison purposes (Calmorin and Calmorin, 2007). In the parallel design, one group is the control group and two groups are experimental groups.
110. Kruskal-Wallis One-Way Analysis of Variance by Ranks (H) – it is another inferential statistic used with independent samples in both descriptive and experimental research. This statistical technique is useful for testing whether K independent samples drawn from different populations differ significantly. To apply the H-test, all the observations from the K samples are combined and ranked together in a single series.
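The H statistic from the pooled ranking described above can be sketched as follows. This is an illustrative example, not from the original notes: the ratings for the three independent samples are hypothetical (and contain no ties, though the function averages ranks in case of ties).

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H: rank all N observations together (average ranks
    for ties), then H = 12 / (N(N+1)) * sum(Rj^2 / nj) - 3(N+1)."""
    pooled = sorted(x for g in groups for x in g)
    N = len(pooled)
    # average rank for each distinct value handles ties
    rank = {}
    for v in set(pooled):
        positions = [i + 1 for i, x in enumerate(pooled) if x == v]
        rank[v] = sum(positions) / len(positions)
    return 12 / (N * (N + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (N + 1)

# Hypothetical ratings from three independent samples.
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

For K samples, H is compared against the chi-square tabular value with K − 1 degrees of freedom (here 5.991 for df = 2 at the 0.05 level).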