Note: Input values must be separated by tabs. You can copy and paste them directly from Excel.

Your data must have exactly the same header (variable names) in the first row.

For examples of how this data should look, click the Input Examples tab.



        
        
Data for this example are from the following study:
Cavanagh, K., Strauss, C., Forder, L., & Jones, F. (2014). Can mindfulness and acceptance be learnt by self-help?: A systematic review and meta-analysis of mindfulness and acceptance-based self-help interventions. Clinical Psychology Review.

Effect size and sampling variance


        

Fixed effects model


        

Random effects model


        

[Criteria for assessing heterogeneity]
I^2 (how much effect sizes differ across studies)
25-50: Low to moderate heterogeneity
50-75: Moderate to substantial heterogeneity
75-100: Considerable heterogeneity

Test for heterogeneity: p < .05 indicates the effect sizes are not homogeneous.

H (sqrt(H^2)) > 1 suggests unexplained heterogeneity.
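These statistics can be computed directly from Cochran's Q; a minimal Python sketch (illustrative only, not MAVIS's own code, which relies on the metafor package):

```python
import math

def heterogeneity_stats(q, k):
    """I^2 and H computed from Cochran's Q with k studies (df = k - 1)."""
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100  # percent of variability due to heterogeneity
    h = math.sqrt(q / df)              # H = sqrt(H^2); H > 1 suggests unexplained heterogeneity
    return i2, h

# Example: Q = 30 across 11 studies
i2, h = heterogeneity_stats(q=30.0, k=11)
print(round(i2, 1), round(h, 2))  # 66.7 1.73
```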



Forest plot (Fixed effects model)

Download the plot as pdf

Forest plot (Random effects model)

Download the plot as pdf

Funnel plot (Fixed effects model)

Download the plot as pdf

Open circles (if any) on the right side are missing null studies estimated with the trim-and-fill method and added to the funnel plot.


Funnel plot (Random effects model)

Download the plot as pdf

Open circles (if any) on the right side are missing null studies estimated with the trim-and-fill method and added to the funnel plot.



Publication Bias


        

Fail-safe N is the number of null (nonsignificant) studies that would have to be added to make the meta-analytic result nonsignificant. "When the fail-safe N is high, that is interpreted to mean that even a large number of nonsignificant studies may not influence the statistical significance of meta-analytic results too greatly" (Oswald & Plonsky, 2010).


Weight-Function Model for Publication Bias

The p-value cut points can be changed in the Weight-Function Model settings.


        

If the p-value for the likelihood ratio test is significant, there may be evidence of publication bias (Vevea & Hedges, 1995).



Moderator (subgroup) analysis


        

Categorical moderator graph (Fixed effects model)


Categorical moderator graph (Random effects model)



R session and package information

      

Note: Input values must be separated by tabs. You can copy and paste them directly from Excel.

Your data must have exactly the same header (variable names) in the first row.


Mean Differences (n, M, SD)

Data for this example are from the following study:

Cavanagh, K., Strauss, C., Forder, L., & Jones, F. (2014). Can mindfulness and acceptance be learnt by self-help?: A systematic review and meta-analysis of mindfulness and acceptance-based self-help interventions. Clinical Psychology Review.
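From n, M, and SD columns like these, the standardized mean difference and its sampling variance can be computed; a minimal Python sketch (illustrative only — MAVIS itself relies on the metafor package):

```python
import math

def smd(n1, m1, sd1, n2, m2, sd2):
    """Cohen's d from group sizes, means, and SDs, plus its sampling variance."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    # Large-sample approximation for the sampling variance of d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

d, v = smd(n1=20, m1=12.0, sd1=4.0, n2=20, m2=10.0, sd2=4.0)
print(round(d, 2), round(v, 3))  # 0.5 0.103
```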


        
        

Mean Differences (n, Effect size d)


        
        

Correlations (n, r)

Data for this example are from the following study:

Molloy, G. J., O'Carroll, R. E., & Ferguson, E. (2014). Conscientiousness and medication adherence: A meta-analysis. Annals of Behavioral Medicine.


        
        

Dichotomous (upoz, uneg, NU, kpoz, kneg, NK)


        
        

Note: Input values must be separated by tabs. You can copy and paste them directly from Excel.

Your data must have exactly the same header (variable names) in the first row.


IRR (categorical with two raters)


        
        

        

IRR (categorical with three or more raters)


        
        

        

IRR (continuous with two raters)


        
        

        

IRR (continuous with three or more raters)


        
        

        

Fisher's r-to-z transformed correlation coefficient is the default estimator for the metafor package.
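The transformation and its sampling variance are straightforward to compute; an illustrative Python sketch (the 1/(n - 3) variance is the standard large-sample formula):

```python
import math

def fisher_z(r, n):
    """Fisher's r-to-z transform of a correlation and its sampling variance."""
    z = 0.5 * math.log((1 + r) / (1 - r))  # equivalently math.atanh(r)
    var_z = 1 / (n - 3)                    # standard large-sample variance
    return z, var_z

z, v = fisher_z(r=0.5, n=103)
print(round(z, 3), v)  # 0.549 0.01
```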


      

Log odds ratio is the default option and is the one you should use for the example provided in the Input Examples tab.
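For a 2x2 table of counts, the log odds ratio and its sampling variance can be sketched as follows (illustrative Python with made-up counts, not metafor's code):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its sampling variance from 2x2 cell counts
    (a, b = events/non-events in group 1; c, d = events/non-events in group 2)."""
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d  # Woolf's variance formula
    return log_or, var

lor, v = log_odds_ratio(a=10, b=90, c=5, d=95)
print(round(lor, 3), round(v, 3))  # 0.747 0.322
```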


      

Restricted maximum likelihood (REML) is the default estimator for the metafor package.

The Knapp & Hartung adjustment is turned off by default in the metafor package.

The Knapp and Hartung (2003) method is an adjustment to the standard errors of the estimated coefficients, which helps to account for the uncertainty in the estimate of the amount of (residual) heterogeneity and leads to different reference distributions.

References

Knapp, G., & Hartung, J. (2003). Improved tests for a random effects meta-regression with a single covariate. Statistics in Medicine, 22, 2693–2710.

Three different estimators for the number of missing studies were proposed by Duval and Tweedie (2000a, 2000b; see also Duval, 2005). The default estimator for the metafor package is L0.


        

References

Duval, S. J., & Tweedie, R. L. (2000a). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455–463.

Duval, S. J., & Tweedie, R. L. (2000b). A nonparametric trim and fill method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89–98.

Duval, S. J. (2005). The trim and fill method. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.) Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 127–144). Chichester, England: Wiley.

Regression Test Options


Funnel Plot Options
Check this box if you would like your funnel plots to be contour-enhanced (see Peters et al., 2008).
Check this box if you would like to see the full results from the fitted model.

For more information about the different methods of detecting publication bias in a meta-analysis, see Jin, Zhou, and He (2015).

References

Egger, M., Davey Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal, 315, 629–634.

Jin, Z.-C., Zhou, X.-H., & He, J. (2015). Statistical methods for dealing with publication bias in meta-analysis. Statistics in Medicine, 34, 343–360.

Peters, J. L., Sutton, A. J., Jones, D. R., Abrams, K. R., & Rushton, L. (2008). Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. Journal of Clinical Epidemiology, 61(10), 991–996.

Sterne, J. A. C., & Egger, M. (2001). Funnel plots for detecting bias in meta-analysis: Guidelines on choice of axis. Journal of Clinical Epidemiology, 54(10), 1046–1055.


Weight-Function Model Options

Select at least one p-value cutpoint to include in your model. To include a cutpoint not provided, type it in and press enter.

For more advanced options with this model, see the authors' Shiny app at https://vevealab.shinyapps.io/WeightFunctionModel/

References

Coburn, K. M. & Vevea, J. L. (2015). Publication bias as a function of study characteristics. Psychological Methods, 20(3), 310.

Vevea, J. L. & Hedges, L. V. (1995). A general linear model for estimating effect size in the presence of publication bias. Psychometrika, 60(3), 419-435.

Vevea, J. L. & Woods, C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10(4), 428-443.

Coburn, K. M. & Vevea, J. L. (2017). weightr: Estimating Weight-Function Models for Publication Bias. R package version 1.1.2. https://CRAN.R-project.org/package=weightr


Method for running the file drawer analysis. The default in the metafor package is Rosenthal.

The Rosenthal method (sometimes called a ‘file drawer analysis’) calculates the number of studies averaging null results that would have to be added to the given set of observed outcomes to reduce the combined significance level (p-value) to a target alpha level (e.g., .05). The calculation is based on Stouffer’s method to combine p-values and is described in Rosenthal (1979).

The Orwin method calculates the number of studies averaging null results that would have to be added to the given set of observed outcomes to reduce the (unweighted) average effect size to a target (unweighted) average effect size. The method is described in Orwin (1983).

The Rosenberg method calculates the number of studies averaging null results that would have to be added to the given set of observed outcomes to reduce the significance level (p-value) of the (weighted) average effect size (based on a fixed-effects model) to a target alpha level (e.g., .05). The method is described in Rosenberg (2005).
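The Rosenthal calculation can be illustrated with Stouffer's combined z; a minimal Python sketch (illustrative only, not metafor's implementation):

```python
import math

def failsafe_rosenthal(z_values, alpha_z=1.645):
    """Rosenthal's fail-safe N: the number of averaging-null studies needed to
    push the Stouffer combined z below the one-tailed critical value (z = 1.645
    for alpha = .05). Solves sum(z) / sqrt(k + N) = alpha_z for N."""
    k = len(z_values)
    n = (sum(z_values) / alpha_z) ** 2 - k
    return max(0, math.ceil(n))

# z-scores of five hypothetical observed studies
print(failsafe_rosenthal([2.1, 1.9, 2.5, 3.0, 1.7]))  # 42
```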


        

References

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641.

Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8, 157–159.

Rosenberg, M. S. (2005). The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59, 464–468.

Group 1:


Group 2:


Option:
Click here to update your results.

Checking the input data


Mean of the differences and 95% CI


          

t-test


          


          

Effect size indices


          

ANCOVA F-statistic to Effect Size


Click here to update your results


Effect size indices


          


Mean Values from ANCOVA F-statistic to Effect Size


Click here to update your results


Effect size indices


          


Chi-Squared Statistic to Effect Size


Click here to update your results


Effect size indices


          


Dichotomous Variables


Effect Size Estimates and Corresponding Sampling Variances


          

Group 1:


Group 2:


Effect Size Estimates and Corresponding Sampling Variances


          


Click here to update your results


Effect size indices


          



Proportions to Effect Size

Proportion One

Proportion Two


Click here to update your results


Effect size indices


          



p-value to Effect Size


Click here to update your results


Effect size indices


          



Single Case Design Type

Click here to update your results


Single Case Design Data Entry

The left column should contain the condition labels, and the right column should contain the obtained scores.


                
                

Below is your computed effect size, unless you've selected either Percentage of Nonoverlapping Data or Percentage of Data Points Exceeding the Median, in which case the number below is the percentage.
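Percentage of Nonoverlapping Data, for example, is the share of treatment-phase points that exceed the highest baseline point (assuming higher scores indicate improvement); a minimal Python sketch:

```python
def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: percent of treatment-phase points
    above the baseline maximum (assumes improvement = higher scores)."""
    ceiling = max(baseline)
    above = sum(1 for x in treatment if x > ceiling)
    return 100 * above / len(treatment)

# Four baseline points, five treatment points; four of five exceed max(baseline) = 5
print(pnd(baseline=[3, 4, 4, 5], treatment=[6, 7, 5, 8, 9]))  # 80.0
```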


                

References

Bulte, I., & Onghena, P. (2008). An R package for single-case randomization tests. Behavior Research Methods, 40, 467–478.

Bulte, I., & Onghena, P. (2009). Randomization tests for multiple baseline designs: An extension of the SCRT-R package. Behavior Research Methods, 41, 477–485.



About MAVIS

MAVIS was designed from the beginning to help users run a meta-analysis as effortlessly as possible. It accomplishes this by leveraging the R programming language for data analysis and the Shiny package from RStudio for the user interface and server software. Together, these give MAVIS an easy-to-use interface backed by the analytical power of R.


MAVIS Version 1.1.3

Last updated July 7, 2017

Number of monthly downloads from CRAN


Acknowledgments

W. Kyle Hamilton would like to thank the Health Communications and Interventions Lab at the University of California, Merced for their comments and beta testing efforts on this application as well as Kathleen Coburn for her feedback and evaluation of the statistical methods related to this project.

Atsushi Mizumoto would like to thank Dr. Luke Plonsky and Dr. Yo In'nami for their support and feedback to create this web application.


Authors

W. Kyle Hamilton - University of California, Merced

W. Kyle Hamilton maintains this application and has authored new features.


Burak Aydin, PhD - RTE University

Burak Aydin is working on a Turkish version of MAVIS and contributed the dichotomous data entry feature.


Atsushi Mizumoto, PhD - Kansai University

Atsushi Mizumoto wrote the first version of this application; this application is a fork of the original which can be found here.


Contributors

Kathleen Coburn - University of California, Merced

Kathleen Coburn contributed technical advice on how to run a meta-analysis as well as information on publication bias.


Nicole Zelinsky - University of California, Merced

Nicole Zelinsky contributed the inter-rater reliability module.


Bug Reports

If you discover a problem with MAVIS, please submit it to the project's GitHub issues page: https://github.com/kylehamilton/MAVIS/issues

MAVIS is an open-source project; you are more than welcome to submit patches or features and help the project grow.


Feedback about MAVIS

Feedback about your MAVIS experience is always welcome and highly encouraged!

Feel free to contact the project maintainer with any questions, user experiences, uses of MAVIS, or feature requests at kyle.hamilton@gmail.com.


License

MAVIS: Meta Analysis via Shiny

Copyright 2016 W. Kyle Hamilton, Burak Aydin, and Atsushi Mizumoto

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/gpl.html


Further Information

If you would like to learn more about the GNU General Public License and what it means, tl;dr Legal has a simple explanation at https://www.tldrlegal.com/l/gpl-3.0


Support

If you're having problems with MAVIS, feel free to refer to our GitHub wiki or the documentation available on CRAN.

CRAN page for MAVIS
GitHub Wiki page for MAVIS

As always, you are more than welcome to contact the project maintainer at kyle.hamilton@gmail.com.