The document discusses null and alternative hypotheses.
The null hypothesis states that there is no relationship or difference between two variables and is what researchers aim to disprove. It is represented by H0 and can be rejected but not accepted.
The alternative hypothesis proposes an alternative theory to the null hypothesis by stating a relationship or difference does exist between variables. It is represented by H1 or Ha.
If the null hypothesis is rejected based on a low p-value, the alternative hypothesis is supported, meaning the results are statistically significant. Examples of null and alternative hypotheses are provided.
This document outlines the process of hypothesis testing. It begins with defining key terms like the null hypothesis (H0), alternative hypothesis (H1), significance level, test statistic, critical value, and decision rule. It then explains the steps involved: 1) setting up H0 and H1, 2) choosing a significance level, 3) calculating the test statistic, 4) finding the critical value, and 5) making a decision by comparing the test statistic and critical value. The overall goal of hypothesis testing is to evaluate claims about a population parameter based on a sample's data.
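The five steps above can be sketched in Python. This is a minimal illustration with invented sample data, testing H0: µ = 50 against a two-sided alternative at α = 0.05:

```python
import numpy as np
from scipy import stats

# Step 1: H0: mu = 50, H1: mu != 50 (hypothetical sample data)
sample = np.array([52.1, 48.3, 55.0, 51.2, 49.8, 53.7, 50.5, 54.2])

# Step 2: choose a significance level
alpha = 0.05

# Step 3: calculate the test statistic (one-sample t-test)
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# Step 4: find the critical value for a two-tailed test
df = len(sample) - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Step 5: decision rule - reject H0 if |t| exceeds the critical value
reject = abs(t_stat) > t_crit
```

Comparing |t| with the critical value is equivalent to comparing the p-value with α; both routes give the same decision.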
This document provides an overview of analysis of variance (ANOVA). It describes how ANOVA was developed by R.A. Fisher in 1920 to analyze differences between multiple sample means. The document outlines the F-statistic used in ANOVA to compare between-group and within-group variations. It also describes one-way and two-way classifications of ANOVA and provides examples of applications in fields like agriculture, biology, and pharmaceutical research.
Parametric tests make specific assumptions about the population parameter and use distributions to determine test statistics. They apply to interval/ratio variables where the population is completely known. Nonparametric tests do not make assumptions about the population or its distribution and use arbitrary test statistics. They apply to nominal/ordinal variables where the population is unknown. The key differences are in the basis of the test statistic, measurement level, measure of central tendency, population information known, and applicability to variables versus attributes.
1) The t-test is a statistical test used to determine if there are any statistically significant differences between the means of two groups, and was developed by William Gosset under the pseudonym "Student".
2) The t-distribution is used for calculating t-tests when sample sizes are small and/or variances are unknown. It has a mean of zero and variance greater than one.
3) Paired t-tests are used to compare the means of two related groups when samples are paired, while unpaired t-tests are used to compare unrelated groups or independent samples.
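The paired/unpaired distinction can be illustrated with SciPy; the before/after measurements below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: the same six subjects measured before and after treatment
before = np.array([120, 132, 118, 141, 125, 130])
after  = np.array([115, 128, 117, 135, 121, 127])

# Paired t-test: the samples are related (same subjects measured twice)
t_paired, p_paired = stats.ttest_rel(before, after)

# Unpaired (independent) t-test: treats the two groups as unrelated
t_unpaired, p_unpaired = stats.ttest_ind(before, after)
```

Because pairing removes the between-subject variation, the paired test is typically far more sensitive on data like this, which is why the choice of test must match the design.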
The chi-square test is used to compare observed data with expected data. It was developed by Karl Pearson in 1900. The chi-square test calculates the sum of the squares of the differences between the observed and expected frequencies divided by the expected frequency. The chi-square value is then compared to a critical value to determine if there is a significant difference between the observed and expected results. The degrees of freedom, which determine the critical value, are calculated based on the number of rows and columns in a contingency table. The chi-square test can be used to test goodness of fit, independence of attributes, and other hypotheses.
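The calculation described above can be sketched as follows, using a hypothetical goodness-of-fit example (a die assumed fair, rolled 60 times):

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts for the six faces of a die rolled 60 times
observed = np.array([8, 9, 12, 11, 6, 14])
expected = np.full(6, 10.0)   # fair die: 60 / 6 = 10 expected per face

# chi-square = sum of (observed - expected)^2 / expected
chi2 = np.sum((observed - expected) ** 2 / expected)

# compare with the critical value at alpha = 0.05, df = k - 1 = 5
critical = stats.chi2.ppf(0.95, df=5)
reject = chi2 > critical
```

Here chi² = 4.2 falls below the critical value (about 11.07), so the observed frequencies are consistent with a fair die.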
This document discusses parametric statistical tests. It defines parametric tests as those that make assumptions about the population distribution parameters. The key parametric tests covered are: t-tests (paired, unpaired, one sample), ANOVA (one way, two way), Pearson's correlation, and the z-test. Details are provided on the assumptions, calculations, and applications of each test. T-tests are used to compare means, ANOVA compares multiple group means, Pearson's r measures correlation between variables, and the z-test is for large samples when the population standard deviation is known.
There are two types of errors in hypothesis testing:
Type I errors occur when a null hypothesis is true but rejected. This is a false positive. Type I error rate is called alpha.
Type II errors occur when a null hypothesis is false but not rejected. This is a false negative. Type II error rate is called beta.
Reducing one type of error increases the other - more stringent criteria lower Type I errors but raise Type II errors, and vice versa. Both errors cannot be reduced simultaneously.
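This trade-off can be made concrete with a small sketch, assuming a one-sided z-test of H0: µ = 0 against H1: µ = 1 with known σ = 1 and n = 9 (the numbers are illustrative):

```python
import math
from scipy import stats

n, effect, sigma = 9, 1.0, 1.0
se = sigma / math.sqrt(n)

def beta(alpha):
    """Type II error rate for the one-sided z-test at the given alpha."""
    z_crit = stats.norm.ppf(1 - alpha)       # rejection cutoff under H0
    # beta = P(fail to reject H0 | H1 is true)
    return stats.norm.cdf(z_crit - effect / se)

# A stricter alpha (fewer Type I errors) yields a larger beta (more Type II errors)
b_05 = beta(0.05)
b_01 = beta(0.01)
```

Tightening α from 0.05 to 0.01 raises β, which is exactly the trade-off the text describes; only a larger sample size can reduce both at once.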
The document provides an overview of statistical hypothesis testing and various statistical tests used to analyze quantitative and qualitative data. It discusses types of data, key terms like null hypothesis and p-value. It then outlines the steps in hypothesis testing and describes different tests of significance including standard error of difference between proportions, chi-square test, student's t-test, paired t-test, and ANOVA. Examples are provided to demonstrate how to apply these statistical tests to determine if differences observed in sample data are statistically significant.
This document provides an overview of analysis of variance (ANOVA). It introduces ANOVA and its key concepts, including its development by Ronald Fisher. It defines ANOVA and distinguishes between one-way and two-way ANOVA. It outlines the assumptions, techniques, and examples of how to perform one-way and two-way ANOVA. It also discusses the uses, advantages, and limitations of ANOVA for analyzing differences between multiple means and factors.
The document discusses regression analysis and its key concepts. Regression analysis is used to understand the relationship between two or more variables and make predictions. There are two main types: simple linear regression, which involves two variables, and multiple regression, which involves more than two variables. Regression lines show the average relationship between the variables and can be used to predict outcomes. The regression coefficients measure the change in the dependent variable for a unit change in the independent variable. The standard error of the estimate indicates how close the data points are to the regression line.
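A minimal sketch of simple linear regression (the hours-vs-scores data is invented for illustration):

```python
import numpy as np

# Hypothetical data: hours studied (x) vs exam score (y)
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([52, 55, 61, 64, 70, 73], dtype=float)

# Least-squares fit: slope b is the change in y per unit change in x
b, a = np.polyfit(x, y, deg=1)
y_hat = a + b * x

# Standard error of the estimate: spread of the points around the line
residuals = y - y_hat
se_est = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2))

# Prediction for a new value of the independent variable
predicted_score = a + b * 7
```

The slope here is Sxy/Sxx = 76.5/17.5 ≈ 4.37, i.e. each extra hour of study is associated with roughly 4.4 more points on average.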
The document provides information about the Chi-square test, including:
- It is a non-parametric test used to evaluate categorical data using contingency tables. The test statistic follows a Chi-square distribution.
- It can test for independence between variables and goodness of fit to theoretical distributions.
- Key steps involve calculating expected frequencies, taking the difference between observed and expected, and summing the results.
- The test interprets higher Chi-square values as less likelihood the results are due to chance. Modifications like Yates' correction and Fisher's exact test address limitations for small sample sizes.
BRM (One-Tailed and Two-Tailed Hypothesis), by Upama Dwivedi
This document discusses one-tailed and two-tailed hypothesis tests. It defines a hypothesis as an assumption made about the probable results of research. The null hypothesis assumes a parameter takes a certain value, while the alternative hypothesis expresses how the parameter may deviate. A one-tailed test examines if a parameter falls on one side of the distribution, while a two-tailed test looks at both sides. Two-tailed tests are more conservative since they require more extreme test statistics to reject the null hypothesis. Examples are provided to illustrate the difference between one-tailed and two-tailed tests.
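The conservatism of the two-tailed test can be seen numerically: for a symmetric distribution the two-tailed p-value is twice the one-tailed p-value, so the same test statistic can be significant one-tailed but not two-tailed:

```python
from scipy import stats

# The same z statistic evaluated one- and two-tailed
z = 1.8
p_one_tailed = 1 - stats.norm.cdf(z)              # P(Z > z)
p_two_tailed = 2 * (1 - stats.norm.cdf(abs(z)))   # P(|Z| > z)

# At alpha = 0.05 the one-tailed test rejects H0, the two-tailed test does not
```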
Today’s overwhelming number of techniques applicable to data analysis makes it extremely difficult to define the most beneficial approach while considering all the significant variables.
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means.

Viewed another way, ANOVA is an analysis tool that splits the observed aggregate variability found inside a data set into two parts: systematic factors and random factors. The systematic factors have a statistical influence on the given data set, while the random factors do not. Analysts use the ANOVA test to determine the influence that independent variables have on the dependent variable in a regression study.
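A one-way ANOVA can be run in a few lines with SciPy; the three treatment groups below are hypothetical:

```python
from scipy import stats

# Hypothetical yields under three fertiliser treatments
group_a = [20, 22, 19, 24]
group_b = [28, 30, 27, 26]
group_c = [18, 17, 21, 20]

# F = between-group variation / within-group variation
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# A small p-value means at least one group mean differs from the others
significant = p_value < 0.05
```

Note that a significant F only says the means are not all equal; identifying which pairs differ requires a follow-up (post hoc) comparison.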
Sir Ronald Fisher pioneered the development of ANOVA for analyzing the results of agricultural experiments.[1] Today, ANOVA is included in almost every statistical package, which makes it accessible to investigators in all experimental sciences. It is easy to input a data set and run a simple ANOVA, but it is challenging to choose the appropriate ANOVA for different experimental designs, to examine whether the data adhere to the modeling assumptions, and to interpret the results correctly. The purpose of this report, together with the next two articles in the Statistical Primer for Cardiovascular Research series, is to enhance understanding of ANOVA and to promote its successful use in experimental cardiovascular research. My colleagues and I attempt to accomplish those goals through examples and explanation, while keeping the burden of notation, technical jargon, and mathematical equations within reason.
The document discusses different types of t-tests, including the one sample t-test, independent samples t-test, and paired t-test. It explains the assumptions and equations for each test and provides examples of their applications. The key differences between the t-test and z-test are also outlined. Specifically, t-tests are used for small sample sizes when the population variance is unknown, while z-tests are for large samples when the variance is known.
Parametric tests are very important in hypothesis testing. This presentation covers all types of parametric tests with their assumptions, including the meaning of "parametric", the z-test (one-sample and two-sample), t-test, analysis of variance (one-way and two-way ANOVA), F-test, and chi-square test, along with Fisher's contributions.
Correlation and regression analysis are statistical tools used to analyze relationships between variables. Correlation measures the strength and direction of association between two variables on a scale from -1 to 1. Regression analysis uses one variable to predict the value of another variable and draws a best-fit line to represent their relationship. There are always two lines of regression - one showing the regression of x on y and the other showing the regression of y on x. Regression coefficients from these lines indicate the slope and intercept of the lines and can help estimate unknown variable values based on known values.
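These relationships can be sketched numerically with invented data; a useful check is that r² equals the product of the two regression slopes (y on x, and x on y):

```python
import numpy as np

# Hypothetical paired observations
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.5, 3.9, 6.2, 7.8, 10.1])

# Pearson correlation: strength and direction of linear association, in [-1, 1]
r = np.corrcoef(x, y)[0, 1]

# The two regression lines: y on x, and x on y
b_yx = np.polyfit(x, y, 1)[0]   # slope of the regression of y on x
b_xy = np.polyfit(y, x, 1)[0]   # slope of the regression of x on y

# Identity: r^2 == b_yx * b_xy
```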
This document provides information about student's t-test. It defines the t-test as a statistical method used to determine if there is a significant difference between the means of two groups. The t-test compares the means of samples A and B and calculates a t-statistic to determine if the null hypothesis that the means are the same can be rejected. An example is provided to demonstrate how to calculate the t-statistic and compare it to critical values from a t-distribution table to conclude if the difference between the sample means is statistically significant. Both one-tailed and two-tailed tests are discussed as well as restrictions of the t-test such as its assumptions of normal distributions and requirements for certain types of data.
This document provides an overview of statistical inference and hypothesis testing. It discusses key concepts such as the null and alternative hypotheses, type I and type II errors, one-tailed and two-tailed tests, test statistics, p-values, confidence intervals, and parametric vs non-parametric tests. Specific statistical tests covered include the t-test, z-test, ANOVA, chi-square test, and correlation analyses. The document also addresses how sample size affects test power and significance.
This document provides an overview of statistical tests of significance used to analyze data and determine whether observed differences could reasonably be due to chance. It defines key terms like population, sample, parameters, statistics, and hypotheses. It then describes several common tests including z-tests, t-tests, F-tests, chi-square tests, and ANOVA. For each test, it outlines the assumptions, calculation steps, and how to interpret the results to evaluate the null hypothesis. The goal of these tests is to determine if an observed difference is statistically significant or could reasonably be expected due to random chance alone.
This document discusses hypothesis testing and the scientific method. It provides details on:
- The key steps of the scientific method including observation, formulation of a question, data collection, hypothesis testing, analysis and conclusion.
- The different types of hypotheses such as simple vs complex, directional vs non-directional, null vs alternative.
- The steps of hypothesis testing including stating the null and alternative hypotheses, using a test statistic, determining the p-value and significance level, and deciding whether to reject or fail to reject the null hypothesis.
- Examples are given to illustrate hypothesis testing and how the p-value is compared to the significance level to determine if the null hypothesis can be rejected.
Hypothesis testing involves stating a null hypothesis (H0) and an alternative hypothesis (H1). H0 assumes there is no effect or relationship in the population. H1 states there is an effect. A study is conducted and statistics are used to determine if the data supports rejecting H0 in favor of H1. The p-value indicates the probability of obtaining results as extreme as the observed data or more extreme if H0 is true. If p ≤ the predetermined significance level (α = 0.05), H0 is rejected in favor of H1. Otherwise, H0 is retained but not proven true. Type I and II errors can occur when the true hypothesis is incorrectly rejected or retained.
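The decision rule described above reduces to a one-line comparison (α = 0.05 assumed, as in the text):

```python
def decide(p_value, alpha=0.05):
    """Reject H0 when the p-value is at or below the significance level."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

# e.g. decide(0.03) -> "reject H0"; decide(0.20) -> "fail to reject H0"
```

Note the asymmetry: "fail to reject" is not "accept" — retaining H0 only means the evidence was insufficient.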
Hypothesis, characteristics of a good hypothesis, contribution to the research study, types of hypothesis, sources, level of significance, two-tailed and one-tailed tests, types of errors.
1) A hypothesis is a tentative statement proposed for testing through scientific investigation. It predicts the relationship between two or more variables.
2) Hypotheses guide research design and analysis by specifying the variables to be studied and their expected relationships.
3) The main types of hypotheses are simple, complex, directional, non-directional, null, and alternative. Hypotheses can also be classified as associative, causal, statistical, or research hypotheses.
This document provides an overview of key concepts in statistics, including hypothesis testing, null and alternative hypotheses, regression analysis, correlation, the exponential distribution, types of errors in hypothesis testing, central tendency, Bayes' theorem, Chebyshev's theorem, and simple random sampling. It defines these terms and provides examples to illustrate statistical concepts.
This document discusses hypotheses, including their definition, characteristics of a good hypothesis, and different types of hypotheses. Some key points:
- A hypothesis is a tentative explanation or proposed solution to a problem that can be tested. It predicts the relationship between two or more variables.
- Good hypotheses clearly state the relationship between measurable variables and have implications that allow them to be tested.
- There are different types of hypotheses, including null hypotheses, alternative hypotheses, directional hypotheses, and universal vs. existential hypotheses.
- Characteristics of a good hypothesis include being testable, verifiable, conceptually clear, and related to available techniques. The role of variables should also be clearly indicated.
This document discusses hypotheses, including defining what a hypothesis is, the purpose of hypotheses, different types of hypotheses, and guidelines for formulating hypotheses. Specifically, it states that a hypothesis is a tentative prediction about the relationship between two or more variables, it guides research and allows relationships to be tested, there are null and alternative hypotheses as well as descriptive vs causal hypotheses, and hypotheses should be declarative sentences supported by evidence and establish a logical link to the research problem.
Hypothesis testing involves making tentative assumptions about population parameters or distributions, called null hypotheses (H0). Alternative hypotheses (Ha) are also defined. Sample data is used to determine if H0 can be rejected. If rejected, the conclusion is that Ha is true. There are two types of errors that can occur - type I errors when a true H0 is rejected, and type II errors when a false H0 is not rejected. The significance level and power aim to control these errors. One-tailed and two-tailed tests look at relationships between variables in different ways.
This document discusses hypotheses, including their characteristics, criteria for construction, testing approaches, and types of errors. It defines a hypothesis as a tentative explanation for behaviors or events that can be scientifically tested. Key points include:
- Hypotheses must be clear, precise, testable and specify the relationship between variables.
- The null hypothesis is what is being tested, while the alternative is what may be accepted if the null is rejected.
- Tests can be two-tailed, testing in both directions, or one-tailed, testing in one specified direction.
- Type I error occurs when a true null hypothesis is rejected, while Type II error is accepting a false null hypothesis.
Hypothesis Testing: Inferential Statistics pt. 2, by John Labrador
A hypothesis test is a statistical test that is used to determine whether there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. A hypothesis test examines two opposing hypotheses about a population: the null hypothesis and the alternative hypothesis.
This document discusses hypothesis testing and interpretation of data. It provides an example of two lecturers, Sandy and Mandy, who want to test whether providing seminar classes in addition to lectures improves student performance compared to lectures alone. The document outlines the steps in hypothesis testing: 1) Identify the research problem and variables, 2) Specify the null and alternative hypotheses, 3) Choose a significance level, 4) Identify the test statistic, 5) Determine the rejection region, and 6) Select the appropriate statistical test. It defines type 1 and type 2 errors and explains key concepts like the null hypothesis, alternative hypothesis, and significance level.
This document discusses hypothesis testing and related statistical concepts. It defines a hypothesis as a statement that can be tested, and distinguishes between the null hypothesis (H0) and alternative hypothesis (Ha). It explains how to select a significance level, commonly 1%, 5% or 10%, and how this relates to Type I and Type II errors. It also covers one-tailed and two-tailed tests, and when to use z-tests or t-tests to test hypotheses about population means based on sample data. The document provides examples of hypotheses for research topics and how to carry out the statistical tests.
The document discusses different types of hypotheses used in research. It defines a hypothesis as a testable statement about the relationship between two or more variables. The main types discussed are:
- Simple hypotheses involve two variables (independent and dependent).
- Complex hypotheses involve more than two variables.
- Null hypotheses state there is no relationship between variables.
- Alternative hypotheses are concluded when a null hypothesis is rejected.
- Logical hypotheses are proposed explanations with limited evidence that can be tested empirically.
- Empirical hypotheses are tested using observations and experiments.
- Statistical hypotheses can be statistically verified using samples.
- Directional hypotheses state an expected direction of results, while non-directional hypotheses do not specify a direction.
Hypothesis testing involves specifying a null hypothesis (H0) of no effect or relationship between variables and an alternative hypothesis (H1). A significance level such as 5% is set, and a test statistic and p-value are calculated by comparing groups or examining variable associations. If the p-value is less than or equal to the significance level, the null hypothesis is rejected in favor of the alternative hypothesis, meaning the result is statistically significant. Otherwise, the null hypothesis fails to be rejected, meaning the result is not statistically significant.
This document discusses hypotheses in research. A hypothesis is a testable statement about the relationship between two or more variables. Good hypotheses are specific, empirically testable, and manageable. Researchers develop hypotheses to give direction to their research and focus their study. There are different types of hypotheses, including simple, complex, working, null, and alternative hypotheses. Once a hypothesis is developed, researchers select a research design to collect data, such as descriptive or experimental methods, to test their hypothesis.
This document discusses different types of hypotheses used in research. It defines a hypothesis as a proposed explanation for a phenomenon that can be tested. The main types discussed are simple vs complex hypotheses, logical vs empirical hypotheses, directional vs non-directional hypotheses, associative vs causal hypotheses, and the null hypothesis vs the alternative hypothesis. It also discusses types of errors that can occur when testing hypotheses and concludes by emphasizing that hypotheses are provisional explanations that must be tested and can be replaced if not supported.
The document discusses hypothesis testing in research. It defines a hypothesis as a proposition that can be tested scientifically. The key points are:
- A hypothesis aims to explain a phenomenon and can be tested objectively. Common hypotheses compare two groups or variables.
- Statistical hypothesis testing involves a null hypothesis (H0) and alternative hypothesis (Ha). H0 is the initial assumption being tested, while Ha is what would be accepted if H0 is rejected.
- Type I errors incorrectly reject a true null hypothesis. Type II errors fail to reject a false null hypothesis. Hypothesis tests aim to control the probability of type I errors.
- The significance level is the probability of a type I error, commonly set at 5%.
This document provides an introduction to hypotheses, including definitions, characteristics, purposes, variables, sources, and types of hypotheses. It defines a hypothesis as a tentative statement made to explain certain facts or observations that can be tested. Hypotheses should be clear, specific, testable, limited in scope, and logically consistent. The sources of hypotheses include theories, observations, past experiences, and case studies. The document outlines different types of hypotheses and gives an example of a research hypothesis. It also describes common hypothesis tests like t-tests, z-tests, ANOVA, and chi-square tests and notes that good decisions come from effective research.
This presentation explains the hypothesis and its types, the role of the hypothesis, tests of significance, the procedure for testing a hypothesis, and Type I and Type II errors.
3. NULL HYPOTHESIS
The null hypothesis is a general statement that there is no relationship between two phenomena under consideration, or no association between two groups.
A hypothesis, in general, is an assumption that is yet to be proved with sufficient evidence. A null hypothesis is thus the hypothesis a researcher is trying to disprove.
A null hypothesis is a hypothesis capable of being objectively verified, tested, and even rejected.
If a study is to compare method A with method B, and the study proceeds on the assumption that both methods are equally good, then this assumption is termed the null hypothesis.
The null hypothesis should always be a specific hypothesis, i.e., it should state an exact value rather than an approximate one.
The symbol for the null hypothesis is H0, read as H-null, H-zero, or H-naught.
The null hypothesis is usually expressed with just an "equals" sign, and it can either be retained or rejected.
4. PURPOSE OF NULL HYPOTHESIS
The main purpose of a null hypothesis is to verify or disprove proposed statistical assumptions.
Some scientific null hypotheses help to advance a theory.
The null hypothesis is also used to verify consistent results across multiple experiments. For example, a null hypothesis stating that there is no relation between a medication and the age of the patients supports the general effectiveness conclusion and allows recommendations.
5. ACCEPTANCE / REJECTION
When the p-value of the data is less than the significance level of the test, the null hypothesis is rejected, indicating the test results are significant.
However, if the p-value is higher than the significance level, the null hypothesis is not rejected, and the results are considered not significant.
The level of significance is an important concept in hypothesis testing, as it determines the percentage risk of rejecting the null hypothesis when H0 might happen to be true.
In other words, if we take the level of significance at 5%, the researcher is willing to take as much as a 5 percent risk of rejecting the null hypothesis when it (H0) happens to be true.
The null hypothesis cannot be accepted, because a lack of evidence only means that the relationship is not proven. It does not prove that something doesn't exist; it just means that there is not enough evidence, and the study might have missed it.
6. EXAMPLES
The following are some examples of null hypotheses:
If the hypothesis is that "the consumption of a particular medicine reduces the chances of heart arrest", the null hypothesis will be "the consumption of the medicine doesn't reduce the chances of heart arrest."
If the hypothesis asks, "If random test scores are collected from men and women, does the score of one group differ from the other?", a possible null hypothesis will be that the mean test score of men is the same as that of the women:
H0: µ1 = µ2
where H0 is the null hypothesis, µ1 is the mean score of men, and µ2 is the mean score of women.
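This null hypothesis (H0: µ1 = µ2) can be tested with an independent two-sample t-test; the scores below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for the men-vs-women example above
men   = np.array([72, 68, 75, 70, 74, 69])
women = np.array([71, 73, 70, 76, 72, 74])

# H0: mu1 = mu2 - independent two-sample t-test
t_stat, p_value = stats.ttest_ind(men, women)

# With a p-value above 0.05 we fail to reject H0
reject_h0 = p_value < 0.05
```

On this made-up data the difference in means is small relative to the spread, so H0 is retained.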
7. ALTERNATIVE HYPOTHESIS
An alternative hypothesis is a statement that there is a relationship between the two selected variables in a study.
An alternative hypothesis is usually used to state that a new theory is preferable to the old one (the null hypothesis).
This hypothesis can simply be termed an alternative to the null hypothesis.
The alternative hypothesis is the hypothesis to be proved; it indicates that the results of a study are significant and that the sample observations result not just from chance but from some non-random cause.
If a study is to compare method A with method B, and we assume that method A is superior or method B is inferior, then such a statement is termed an alternative hypothesis.
Alternative hypotheses should be clearly stated, considering the nature of the research problem.
The symbol of the alternative hypothesis is either H1 or Ha, and it uses "less than", "greater than", or "not equal" signs.
8. PURPOSE OF ALTERNATIVE HYPOTHESIS
An alternative hypothesis provides the researchers with specific restatements and clarifications of the research problem.
An alternative hypothesis provides a direction to the study, which the researcher can then use to obtain the desired results.
Since the alternative hypothesis is selected before conducting the study, it allows the test to show that the study is supported by evidence, separating it from the researchers' desires and values.
An alternative hypothesis provides a chance of discovering new theories that can disprove an existing one that is not supported by evidence.
The alternative hypothesis is important because it establishes that a relationship exists between the two selected variables and that the results of the study are relevant and significant.
9. EXAMPLES
The following are some examples of alternative hypotheses:
1. If a researcher assumes that the bearing capacity of a bridge is more than 10 tons, the hypotheses under this study will be:
Null hypothesis H0: µ = 10 tons
Alternative hypothesis Ha: µ > 10 tons
2. In another study testing whether there is a significant difference in the effectiveness of a medicine against heart arrest, the alternative hypothesis will be that there is a relationship between the medicine and the chances of heart arrest.
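The bridge example above (H0: µ = 10 vs Ha: µ > 10) corresponds to a one-tailed, one-sample t-test; the load measurements below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical load-bearing measurements (tons) for the bridge example
loads = np.array([10.8, 11.2, 10.5, 11.0, 10.9, 11.4, 10.7, 11.1])

# H0: mu = 10 vs Ha: mu > 10 (one-tailed; 'alternative' needs SciPy >= 1.6)
t_stat, p_one = stats.ttest_1samp(loads, popmean=10, alternative='greater')

# Reject H0 in favour of Ha when the one-tailed p-value is below alpha
reject_h0 = p_one < 0.05
```

Here the sample mean is well above 10 tons, so H0 is rejected in favour of Ha: the data support a bearing capacity greater than 10 tons.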