Topic 3 DQ 2

Please respond to the following post with a paragraph; add citations and references.

The appropriate alpha level depends on the circumstances surrounding a particular study. In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding), while a type II error is failing to reject a false null hypothesis (also known as a "false negative" finding). More simply stated, a type I error is to falsely infer the existence of something that is not there, while a type II error is to falsely infer the absence of something that is ("Type I and type II errors," 2018). For example, if a test determines whether a cancerous organ should be removed, you want to set the alpha level very stringently, at 0.01, because you do not want to remove an organ that is cancer free. On the other hand, if the alpha level is too stringent, you may conclude that the organ does not have cancer when in fact it does, and the patient may die prematurely because you made a type II error: a failure to detect a difference that does in fact exist. This makes for a delicate balance.
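
To make the "false positive" idea concrete, here is a minimal simulation sketch. The sample sizes, trial count, and distributions are assumed purely for illustration, not taken from any real diagnostic test: when the null hypothesis is actually true, the fraction of tests that falsely reject it settles near whatever alpha you choose.

```python
# Minimal sketch, assuming hypothetical two-group data where H0 is TRUE
# (both groups come from the same distribution), so every rejection is
# a type I error. The false-positive rate should land near each alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n = 10_000, 30
false_positives = {0.01: 0, 0.05: 0}

for _ in range(n_trials):
    a = rng.normal(size=n)  # group 1: no real effect
    b = rng.normal(size=n)  # group 2: same distribution as group 1
    p = stats.ttest_ind(a, b).pvalue
    for alpha in false_positives:
        if p < alpha:
            false_positives[alpha] += 1

for alpha, count in false_positives.items():
    print(f"alpha={alpha}: observed type I error rate = {count / n_trials:.3f}")
```

Running this shows roughly 1% of true nulls rejected at alpha = 0.01 and roughly 5% at alpha = 0.05, which is why the stringent 0.01 level suits the organ-removal scenario.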

Situations where you might raise the alpha to 0.1 vary greatly. The obvious one is when the results of the study are not that critical. For example, when testing whether a mood therapy has an effect, you might raise the alpha to make the effect easier to detect, even though this increases the chance of making a type I error. Certain studies may appear to have biased results because the researchers could have raised their alpha to 0.25 in order to claim their product is better; the statistics will support that claim, but the chance of making a type I error is 25%. This is why it is important to know statistics when making decisions that matter to you.
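
As a small illustration of how a lax threshold can manufacture "significance," consider a single made-up borderline result. The p-value below is a hypothetical assumption, not taken from any actual study:

```python
# Hypothetical p-value for illustration only, not a real study result.
p_value = 0.18  # assumed borderline result from a mood-therapy test

for alpha in (0.05, 0.10, 0.25):
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"alpha={alpha:.2f}: {verdict}")

# Only the lax alpha=0.25 threshold declares an effect here, and by
# definition that threshold yields a false positive 25% of the time
# whenever the null hypothesis is actually true.
```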

Generally, the only reasons to raise the type I error rate are a limited sample size or a lack of funds to conduct the study at the 0.05 level. Choosing between a type I error and a type II error is a tradeoff: the more stringent the type I error rate, the greater the chance of a type II error. The question then becomes which is the lesser of two evils. Is it better to say there is an effect when there wasn't (type I error), or is it better to say there was no effect when there was (type II error)? There can be other factors involved, but basically that is what it comes down to.
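
One way to see this tradeoff numerically is with a simple power calculation. The sketch below uses a one-sided two-sample z-test; the effect size and sample size are assumed values chosen for illustration. Tightening alpha lowers power, which is the same thing as a rising type II error rate.

```python
# Sketch of the alpha/power tradeoff for a one-sided two-sample z-test.
# effect_size and n are assumed values chosen for illustration.
from scipy.stats import norm

effect_size = 0.5  # assumed standardized mean difference between groups
n = 30             # assumed observations per group

for alpha in (0.01, 0.05, 0.10):
    z_crit = norm.ppf(1 - alpha)  # rejection cutoff under H0
    power = 1 - norm.cdf(z_crit - effect_size * (n / 2) ** 0.5)
    print(f"alpha={alpha:.2f}: power={power:.2f}, type II error={1 - power:.2f}")

# As alpha drops from 0.10 to 0.01, power falls (roughly 0.74 to 0.35 here),
# so the chance of missing a real effect, the type II error, rises.
```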

References

Type I and type II errors. (2018). In Wikipedia. Retrieved October 8, 2018, from https://en.wikipedia.org/wiki/Type_I_and_type_II_e…