Inside Modern Code Review Research: Themes, Gaps, and Priorities

1 INTRODUCTION

2 BACKGROUND AND RELATED WORK

3 RESEARCH DESIGN

4 MAPPING STUDY RESULTS

5 SURVEY RESULTS

6 COMPARING THE STATE-OF-THE-ART AND THE PRACTITIONERS’ PERCEPTIONS

7 DISCUSSION

8 CONCLUSIONS AND ACKNOWLEDGMENTS

REFERENCES

8 CONCLUSIONS AND ACKNOWLEDGMENTS

In this paper, we conducted a systematic mapping study and a survey to provide an overview of the research themes in Modern Code Review (MCR) and to analyze practitioners’ opinions on the importance of those themes. By juxtaposing these two perspectives, we outline an agenda for future MCR research that builds on the identified research gaps and on the importance practitioners attach to each theme.

We extracted the research contributions from 244 primary studies and summarized 15 years of MCR research in evidence briefings that can support knowledge transfer from academic research to practitioners. The five main themes of MCR research are:

(1) support systems for code reviews (SS),

(2) impact of code reviews on product quality and human aspects (IOF),

(3) modern code review process properties (CRP),

(4) impact of software development processes, patch characteristics, and tools on modern code reviews (ION), and

(5) human and organizational factors (HOF).

We conducted a survey to collect practitioners’ opinions about 46 statements representing the research in the identified themes.

As a result, we learned that practitioners are most positive about the CRP and IOF themes, with a special focus on the impact of code reviews on product quality. However, these themes represent a minority of the reviewed MCR research (66 primary studies). At the same time, the respondents are most negative about research on human factors (HOF) and support systems (SS), which together constitute the majority of the reviewed research (108 primary studies). These results indicate a misalignment between the state-of-the-art and the themes deemed important by most respondents of our survey.

In addition, we found that research perceived positively by practitioners is generally also cited more frequently, i.e., it has a larger research impact. Finally, as interest in reviewing MCR research has grown in recent years, we analyzed other systematic literature reviews and mapping studies. Since these studies address different research questions, their overlap with our set of primary studies varies. Still, they corroborate our observations on the potential gaps in MCR research. Analyzing the data extracted from the reviewed primary studies, and guided by the answers from the survey, we propose nineteen new research questions we deem worth investigating in future MCR research.

ACKNOWLEDGMENTS

This work was supported by the Knowledge Foundation through the projects SERT (Software Engineering ReThought) and OSIR (Open-Source Inspired Reuse, reference number 20190081) at Blekinge Institute of Technology, Sweden. We would also like to thank all practitioners who contributed to our investigation.

REFERENCES

[1] Everton LG Alves, Myoungkyu Song, Tiago Massoni, Patricia DL Machado, and Miryung Kim. 2017. Refactoring inspection support for manual refactoring edits. IEEE Transactions on Software Engineering 44, 4 (2017), 365–383.

[2] Aybuke Aurum, Håkan Petersson, and Claes Wohlin. 2002. State-of-the-art: software inspections after 25 years. Software Testing, Verification and Reliability 12, 3 (2002), 133–154.

[3] Alberto Bacchelli and Christian Bird. 2013. Expectations, Outcomes, and Challenges of Modern Code Review. In Proceedings International Conference on Software Engineering (San Francisco, CA, USA) (ICSE). IEEE, 712–721.

[4] Deepika Badampudi, Ricardo Britto, and Michael Unterkalmsteiner. 2019. Modern Code Reviews - Preliminary Results of a Systematic Mapping Study. In Proceedings of the Evaluation and Assessment on Software Engineering (EASE). ACM, Copenhagen, Denmark, 340–345.

[5] Deepika Badampudi, Michael Unterkalmsteiner, and Ricardo Britto. 2021. Evidence briefings on modern code reviews. https://doi.org/10.5281/zenodo.5093742

[6] Deepika Badampudi, Michael Unterkalmsteiner, and Ricardo Britto. 2022. Data used in modern code review mapping study and survey. https://doi.org/10.5281/zenodo.7464947

[7] Gabriele Bavota and Barbara Russo. 2015. Four eyes are better than two: On the impact of code reviews on software quality. In International Conference on Software Maintenance and Evolution (ICSME). IEEE, 81–90.

[8] Marten Brouwer. 1999. Q is accounting for tastes. Journal of Advertising Research 39, 2 (1999), 35–35.

[9] Steven R Brown. 1993. A primer on Q methodology. Operant subjectivity 16, 3/4 (1993), 91–138.

[10] Bill Brykczynski. 1999. A survey of software inspection checklists. ACM SIGSOFT Software Engineering Notes 24, 1 (1999), 82.

[11] Bruno Cartaxo, Gustavo Pinto, Baldoino Fonseca, Márcio Ribeiro, Pedro Pinheiro, Sergio Soares, and Maria Teresa Baldassarre. 2019. Software Engineering Research Community Viewpoints on Rapid Reviews. In Proceedings of the 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM ’19).

[12] Bruno Cartaxo, Gustavo Pinto, Elton Vieira, and Sérgio Soares. 2016. Evidence briefings: Towards a medium to transfer knowledge from systematic reviews to practitioners. In Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 1–10.

[13] Jeffrey C Carver, Oscar Dieste, Nicholas A Kraft, David Lo, and Thomas Zimmermann. 2016. How practitioners perceive the relevance of ESEM research. In Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 1–10.

[14] H Alperen Çetin, Emre Doğan, and Eray Tüzün. 2021. A review of code reviewer recommendation studies: Challenges and future directions. Science of Computer Programming (2021), 102652.

[15] Zhiyuan Chen, Young-Woo Kwon, and Myoungkyu Song. 2018. Clone refactoring inspection by summarizing clone refactorings and detecting inconsistent changes during software evolution. Journal of Software: Evolution and Process 30, 10 (2018), e1951.

[16] Flavia Coelho, Tiago Massoni, and Everton LG Alves. 2019. Refactoring-aware code review: A systematic mapping study. In 2019 IEEE/ACM 3rd International Workshop on Refactoring (IWoR). IEEE, 63–66.

[17] D. S. Cruzes and T. Dyba. 2011. Recommended Steps for Thematic Synthesis in Software Engineering. In 2011 International Symposium on Empirical Software Engineering and Measurement. 275–284.

[18] Nicole Davila and Ingrid Nunes. 2021. A systematic literature review and taxonomy of modern code review. Journal of Systems and Software (2021), 110951.

[19] Charles H Davis and Carolyn Michelle. 2011. Q methodology in audience research: Bridging the qualitative/quantitative ‘divide’. Participations: Journal of Audience and Reception Studies 8, 2 (2011), 559–593.

[20] M. E. Fagan. 1976. Design and code inspections to reduce errors in program development. IBM Systems Journal 15, 3 (1976), 182–211.

[21] Xavier Franch, Daniel Mendez, Andreas Vogelsang, Rogardt Heldal, Eric Knauss, Marc Oriol, Guilherme Travassos, Jeffrey Clark Carver, and Thomas Zimmermann. 2020. How do Practitioners Perceive the Relevance of Requirements Engineering Research? IEEE Transactions on Software Engineering (2020).

[22] Theresia Devi Indriasari, Andrew Luxton-Reilly, and Paul Denny. 2020. A Review of Peer Code Review in Higher Education. ACM Transactions on Computing Education (TOCE) 20, 3 (2020), 1–25.

[23] Martin Ivarsson and Tony Gorschek. 2011. A method for evaluating rigor and industrial relevance of technology evaluations. Empirical Software Engineering 16, 3 (2011), 365–395.

[24] Barbara A. Kitchenham and Stuart Charters. 2007. Guidelines for performing Systematic Literature Reviews in Software Engineering. Technical Report EBSE-2007-01. Software Engineering Group, Keele University and Department of Computer Science, University of Durham, United Kingdom.

[25] Sami Kollanus and Jussi Koskinen. 2009. Survey of software inspection research. The Open Software Engineering Journal 3, 1 (2009).

[26] Oliver Laitenberger and Jean-Marc DeBaud. 2000. An encompassing life cycle centric survey of software inspection. Journal of systems and software 50, 1 (2000), 5–31.

[27] David Lo, Nachiappan Nagappan, and Thomas Zimmermann. 2015. How practitioners perceive the relevance of software engineering research. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. 415–425.

[28] F Macdonald, J Miller, A Brooks, M Roper, and M Wood. 1995. A review of tool support for software inspection. In Proceedings Seventh International Workshop on Computer-Aided Software Engineering. IEEE, 340–349.

[29] Joseph Maxwell. 1992. Understanding and validity in qualitative research. Harvard educational review 62, 3 (1992), 279–301.

[30] Sumaira Nazir, Nargis Fatima, and Suriayati Chuprat. 2020. Modern code review benefits-primary findings of a systematic literature review. In Proceedings of the 3rd International Conference on Software Engineering and Information Management. ACM, 210–215.

[31] Kai Petersen and Cigdem Gencel. 2013. Worldviews, Research Methods, and their Relationship to Validity in Empirical Software Engineering Research. In Proceedings of the 2013 Joint Conference of the 23rd International Workshop on Software Measurement (IWSM) and the 8th International Conference on Software Process and Product Measurement. 81–89.

[32] Kai Petersen, Sairam Vakkalanka, and Ludwik Kuzniarz. 2015. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology 64 (2015), 1–18.

[33] Per Runeson and Martin Höst. 2009. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering 14, 2 (2009), 131–164.

[34] Caitlin Sadowski, Emma Söderberg, Luke Church, Michal Sipko, and Alberto Bacchelli. 2018. Modern Code Review: A Case Study at Google. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (Gothenburg, Sweden) (ICSE-SEIP ’18). ACM, New York, NY, USA, 181–190.

[35] Mojtaba Shahin, Muhammad Ali Babar, and Liming Zhu. 2017. Continuous integration, delivery and deployment: a systematic review on approaches, tools, challenges and practices. IEEE Access 5 (2017), 3909–3943.

[36] Dong Wang, Yuki Ueda, Raula Gaikovina Kula, Takashi Ishio, and Kenichi Matsumoto. 2019. The Evolution of Code Review Research: A Systematic Mapping Study. arXiv:1911.08816 [cs.SE]

[37] Dong Wang, Yuki Ueda, Raula Gaikovina Kula, Takashi Ishio, and Kenichi Matsumoto. 2021. Can we benchmark Code Review studies? A systematic mapping study of methodology, dataset, and metric. Journal of Systems and Software (2021), 111009.

[38] Roel Wieringa, Neil Maiden, Nancy Mead, and Colette Rolland. 2005. Requirements Engineering Paper Classification and Evaluation Criteria: A Proposal and a Discussion. Requir. Eng. 11, 1 (Dec. 2005), 102–107.

[39] Claes Wohlin. 2014. Guidelines for Snowballing in Systematic Literature Studies and a Replication in Software Engineering. In Proceedings 18th International Conference on Evaluation and Assessment in Software Engineering (EASE). ACM, London, UK, 1–10.

[40] Claes Wohlin, Per Runeson, Paulo Anselmo da Mota Silveira Neto, Emelie Engström, Ivan do Carmo Machado, and Eduardo Santana De Almeida. 2013. On the reliability of mapping studies in software engineering. Journal of Systems and Software 86, 10 (2013), 2594–2610.

[41] Aiora Zabala and Unai Pascual. 2016. Bootstrapping Q methodology to improve the understanding of human perspectives. PloS one 11, 2 (2016), e0148087.

MAPPING STUDY REFERENCES

[42] Toufique Ahmed, Amiangshu Bosu, Anindya Iqbal, and Shahram Rahimi. 2017. SentiCR: a customized sentiment analysis tool for code review interactions. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 106–111.

[43] Wisam Haitham Abbood Al-Zubaidi, Patanamon Thongtanunam, Hoa Khanh Dam, Chakkrit Tantithamthavorn, and Aditya Ghose. 2020. Workload-Aware Reviewer Recommendation Using a Multi-Objective Search-Based Approach. Association for Computing Machinery, New York, NY, USA, 21–30. https://doi.org/10.1145/3416508.3417115

[44] Adam Alami, Marisa Leavitt Cohn, and Andrzej Wąsowski. 2019. Why does code review work for open source software communities?. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 1073–1083.

[45] Eman Abdullah AlOmar, Hussein AlRubaye, Mohamed Wiem Mkaouer, Ali Ouni, and Marouane Kessentini. 2021. Refactoring practices in the context of modern code review: An industrial case study at Xerox. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 348–357.

[46] Everton LG Alves, Myoungkyu Song, and Miryung Kim. 2014. RefDistiller: a refactoring aware code review tool for inspecting manual refactoring edits. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 751–754.

[47] Hirohisa Aman. 2013. 0-1 Programming Model-Based Method for Planning Code Review Using Bug Fix History. In 2013 20th Asia-Pacific Software Engineering Conference (APSEC), Vol. 2. IEEE, 37–42.

[48] F. Armstrong, F. Khomh, and B. Adams. 2017. Broadcast vs. Unicast Review Technology: Does It Matter?. In 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST). 219–229.

[49] Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok. 2019. WhoDo: Automating Reviewer Suggestions at Scale. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Tallinn, Estonia) (ESEC/FSE 2019). Association for Computing Machinery, New York, NY, USA, 937–945. https://doi.org/10.1145/3338906.3340449

[50] Jai Asundi and Rajiv Jayant. 2007. Patch review processes in open source software development communities: A comparative case study. In 2007 40th Annual Hawaii International Conference on System Sciences (HICSS’07). IEEE, 166c–166c.

[51] Krishna Teja Ayinala, Kwok Sun Cheng, Kwangsung Oh, and Myoungkyu Song. 2020. Tool Support for Code Change Inspection with Deep Learning in Evolving Software. In 2020 IEEE International Conference on Electro Information Technology (EIT). IEEE, 013–017.

[52] Krishna Teja Ayinala, Kwok Sun Cheng, Kwangsung Oh, Teukseob Song, and Myoungkyu Song. 2020. Code Inspection Support for Recurring Changes with Deep Learning in Evolving Software. In 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). 931–942. https://doi.org/10.1109/COMPSAC48688.2020.0-149

[53] Muhammad Ilyas Azeem, Qiang Peng, and Qing Wang. 2020. Pull Request Prioritization Algorithm based on Acceptance and Response Probability. In 2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS). 231–242. https://doi.org/10.1109/QRS51102.2020.00041

[54] Alberto Bacchelli and Christian Bird. 2013. Expectations, Outcomes, and Challenges of Modern Code Review. In Proceedings International Conference on Software Engineering (San Francisco, CA, USA) (ICSE). IEEE, 712–721.

[55] Vipin Balachandran. 2013. Reducing human effort and improving quality in peer code reviews using automatic static analysis and reviewer recommendation. In Proceedings International Conference on Software Engineering. IEEE, 931–940.

[56] Vipin Balachandran. 2020. Reducing accidental clones using instant clone search in automatic code review. In 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME). 781–783. https://doi.org/10.1109/ICSME46990.2020.00089

[57] Faruk Balcı, Dilruba Sultan Haliloğlu, Onur Şahin, Cankat Tilki, Mehmet Ata Yurtsever, and Eray Tüzün. 2021. Augmenting Code Review Experience Through Visualization. In 2021 Working Conference on Software Visualization (VISSOFT). IEEE, 110–114.

[58] Mike Barnett, Christian Bird, Joao Brunet, and Shuvendu K Lahiri. 2015. Helping developers help themselves: Automatic decomposition of code review changesets. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Vol. 1. IEEE, 134–144.

[59] Tobias Baum, Fabian Kortum, Kurt Schneider, Arthur Brack, and Jens Schauder. 2017. Comparing pre-commit reviews and post-commit reviews using process simulation. Journal of Software: Evolution and Process 29, 11 (2017), e1865.

[60] Tobias Baum, Olga Liskin, Kai Niklas, and Kurt Schneider. 2016. A faceted classification scheme for change-based industrial code review processes. In 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS). IEEE, 74–85.

[61] Tobias Baum, Olga Liskin, Kai Niklas, and Kurt Schneider. 2016. Factors influencing code review processes in industry. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. 85–96.

[62] Tobias Baum and Kurt Schneider. 2016. On the need for a new generation of code review tools. In International Conference on Product-Focused Software Process Improvement. Springer, 301–308.

[63] Tobias Baum, Kurt Schneider, and Alberto Bacchelli. 2017. On the Optimal Order of Reading Source Code Changes for Review. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 329–340.

[64] Tobias Baum, Kurt Schneider, and Alberto Bacchelli. 2019. Associating working memory capacity and code change ordering with code review performance. Empirical Software Engineering 24, 4 (2019), 1762–1798.

[65] O. Baysal, O. Kononenko, R. Holmes, and M. W. Godfrey. 2012. The Secret Life of Patches: A Firefox Case Study. In 2012 19th Working Conference on Reverse Engineering. 447–455.

[66] Olga Baysal, Oleksii Kononenko, Reid Holmes, and Michael W Godfrey. 2016. Investigating technical and non-technical factors influencing modern code review. Empirical Software Engineering 21, 3 (2016), 932–959.

[67] Andrew Begel and Hana Vrzakova. 2018. Eye Movements in Code Review. In Proceedings of the Workshop on Eye Movements in Programming (Warsaw, Poland) (EMIP ’18). ACM, New York, NY, USA, Article 5, 5 pages.

[68] Moritz Beller, Alberto Bacchelli, Andy Zaidman, and Elmar Juergens. 2014. Modern code reviews in open-source projects: Which problems do they fix?. In Proceedings of the 11th Working Conference on Mining Software Repositories. 202–211.

[69] Mario Bernhart and Thomas Grechenig. 2013. On the understanding of programs with continuous code reviews. In 21st International Conference on Program Comprehension (ICPC). IEEE, 192–198.

[70] Mario Bernhart, Andreas Mauczka, and Thomas Grechenig. 2010. Adopting code reviews for agile software development. In 2010 Agile Conference. IEEE, 44–47.

[71] M. Bernhart, S. Strobl, A. Mauczka, and T. Grechenig. 2012. Applying Continuous Code Reviews in Airport Operations Software. In 2012 12th International Conference on Quality Software. 214–219.

[72] Christian Bird, Trevor Carnahan, and Michaela Greiler. 2015. Lessons learned from building and deploying a code review analytics platform. In Proceedings of the 12th Working Conference on Mining Software Repositories. IEEE, 191–201.

[73] Amiangshu Bosu and Jeffrey C. Carver. 2012. Peer Code Review in Open Source Communities using Reviewboard. In Proceedings of the ACM 4th Annual Workshop on Evaluation and Usability of Programming Languages and Tools (Tucson, Arizona, USA) (PLATEAU ’12). ACM, New York, NY, USA, 17–24.

[74] Amiangshu Bosu and Jeffrey C Carver. 2013. Impact of peer code review on peer impression formation: A survey. In 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. IEEE, 133–142.

[75] Amiangshu Bosu and Jeffrey C Carver. 2014. Impact of developer reputation on code review outcomes in OSS projects: an empirical investigation. In Proceedings of the 8th International Symposium on Empirical Software Engineering and Measurement. ACM, 33.

[76] Amiangshu Bosu, Jeffrey C Carver, Christian Bird, Jonathan Orbeck, and Christopher Chockley. 2016. Process aspects and social dynamics of contemporary code review: Insights from open source development and industrial practice at Microsoft. IEEE Transactions on Software Engineering 43, 1 (2016), 56–75.

[77] Amiangshu Bosu, Jeffrey C Carver, Munawar Hafiz, Patrick Hilley, and Derek Janni. 2014. Identifying the characteristics of vulnerable code changes: An empirical study. In Proceedings 22nd International Symposium on Foundations of Software Engineering. ACM, 257–268.

[78] Amiangshu Bosu, Michaela Greiler, and Christian Bird. 2015. Characteristics of useful code reviews: An empirical study at Microsoft. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 146–156.

[79] Rodrigo Brito and Marco Tulio Valente. 2021. RAID: Tool support for refactoring-aware code reviews. In 2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC). IEEE, 265–275.

[80] Fabio Calefato, Filippo Lanubile, Federico Maiorano, and Nicole Novielli. 2018. Sentiment polarity detection for software development. Empirical Software Engineering 23, 3 (2018), 1352–1382.

[81] Nathan Cassee, Bogdan Vasilescu, and Alexander Serebrenik. 2020. The silent helper: the impact of continuous integration on code reviews. In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 423–434.

[82] Maria Caulo, Bin Lin, Gabriele Bavota, Giuseppe Scanniello, and Michele Lanza. 2020. Knowledge transfer in modern code review. In Proceedings of the 28th International Conference on Program Comprehension. 230–240.

[83] Amudha Chandrika K. R. and J. Amudha. 2018. A fuzzy inference system to recommend skills for source code review using eye movement data. Journal of Intelligent and Fuzzy Systems 34, 3 (2018), 1743–1754.

[84] Ashish Chopra, Morgan Mo, Samuel Dodson, Ivan Beschastnikh, Sidney S Fels, and Dongwook Yoon. 2021. "@alex, this fixes #9": Analysis of Referencing Patterns in Pull Request Discussions. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–25.

[85] Moataz Chouchen, Ali Ouni, Raula Gaikovina Kula, Dong Wang, Patanamon Thongtanunam, Mohamed Wiem Mkaouer, and Kenichi Matsumoto. 2021. Anti-patterns in modern code review: Symptoms and prevalence. In 2021 IEEE international conference on software analysis, evolution and reengineering (SANER). IEEE, 531–535.

[86] Moataz Chouchen, Ali Ouni, Mohamed Wiem Mkaouer, Raula Gaikovina Kula, and Katsuro Inoue. 2021. WhoReview: A multi-objective search-based approach for code reviewers recommendation in modern code review. Applied Soft Computing 100 (2021), 106908. https://doi.org/10.1016/j.asoc.2020.106908

[87] Aleksandr Chueshev, Julia Lawall, Reda Bendraou, and Tewfik Ziadi. 2020. Expanding the Number of Reviewers in Open-Source Projects by Recommending Appropriate Developers. In 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME). 499–510. https://doi.org/10.1109/ICSME46990.2020.00054

[88] Flávia Coelho, Nikolaos Tsantalis, Tiago Massoni, and Everton LG Alves. 2021. An Empirical Study on Refactoring-Inducing Pull Requests. In Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). 1–12.

[89] Atacílio Cunha, Tayana Conte, and Bruno Gadelha. 2021. Code Review is just reviewing code? A qualitative study with practitioners in industry. In Brazilian Symposium on Software Engineering. 269–274.

[90] Jacek Czerwonka, Michaela Greiler, and Jack Tilford. 2015. Code Reviews Do Not Find Bugs: How the Current Code Review Best Practice Slows Us Down. In Proceedings of the 37th International Conference on Software Engineering - Volume 2 (Florence, Italy) (ICSE ’15). IEEE Press, 27–28.

[91] Anastasia Danilova, Alena Naiakshina, Anna Rasgauski, and Matthew Smith. 2021. Code Reviewing as Methodology for Online Security Studies with Developers-A Case Study with Freelancers on Password Storage. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). 397–416.

[92] Manoel Limeira de Lima Júnior, Daricélio Moreira Soares, Alexandre Plastino, and Leonardo Murta. 2015. Developers assignment for analyzing pull requests. In Proceedings 30th Annual ACM Symposium on Applied Computing. ACM, 1567–1572.

[93] Marco di Biase, Magiel Bruntink, and Alberto Bacchelli. 2016. A security perspective on code review: The case of chromium. In 2016 IEEE 16th International Working Conference on Source Code Analysis and Manipulation (SCAM). IEEE, 21–30.

[94] Marco di Biase, Magiel Bruntink, Arie van Deursen, and Alberto Bacchelli. 2019. The effects of change decomposition on code review—a controlled experiment. PeerJ Computer Science 5 (2019), e193.

[95] Eduardo Witter dos Santos and Ingrid Nunes. 2017. Investigating the Effectiveness of Peer Code Review in Distributed Software Development. In Proceedings of the 31st Brazilian Symposium on Software Engineering (Fortaleza, CE, Brazil) (SBES’17). ACM, New York, NY, USA, 84–93.

[96] Tobias Dürschmid. 2017. Continuous Code Reviews: A Social Coding tool for Code Reviews inside the IDE. In Companion to the first International Conference on the Art, Science and Engineering of Programming. ACM, 41.

[97] Felipe Ebert, Fernando Castor, Nicole Novielli, and Alexander Serebrenik. 2017. Confusion detection in code reviews. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 549–553.

[98] Felipe Ebert, Fernando Castor, Nicole Novielli, and Alexander Serebrenik. 2018. Communicative intention in code review questions. In 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 519–523.

[99] Felipe Ebert, Fernando Castor, Nicole Novielli, and Alexander Serebrenik. 2021. An exploratory study on confusion in code reviews. Empirical Software Engineering 26, 1 (2021), 1–48.

[100] Vasiliki Efstathiou and Diomidis Spinellis. 2018. Code review comments: language matters. In Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results. ACM, 69–72.

[101] Carolyn D Egelman, Emerson Murphy-Hill, Elizabeth Kammer, Margaret Morrow Hodges, Collin Green, Ciera Jaspan, and James Lin. 2020. Predicting developers’ negative feelings about code review. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEE, 174–185.

[102] Ikram El Asri, Noureddine Kerzazi, Gias Uddin, Foutse Khomh, and MA Janati Idrissi. 2019. An empirical study of sentiments in code reviews. Information and Software Technology 114 (2019), 37–54.

[103] Muntazir Fadhel and Emil Sekerinski. 2021. Striffs: Architectural Component Diagrams for Code Reviews. In 2021 International Conference on Code Quality (ICCQ). IEEE, 69–78.

[104] George Fairbanks. 2019. Better Code Reviews With Design by Contract. IEEE Software 36, 6 (2019), 53–56. https://doi.org/10.1109/MS.2019.2934192

[105] Yuanrui Fan, Xin Xia, David Lo, and Shanping Li. 2018. Early prediction of merged code changes to prioritize reviewing tasks. Empirical Software Engineering (2018), 1–48.

[106] Mikołaj Fejzer, Piotr Przymus, and Krzysztof Stencel. 2018. Profile based recommendation of code reviewers. Journal of Intelligent Information Systems 50, 3 (2018), 597–619.

[107] Isabella Ferreira, Jinghui Cheng, and Bram Adams. 2021. The "Shut the f**k up" Phenomenon: Characterizing Incivility in Open Source Code Review Discussions. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–35.

[108] Wojciech Frącz and Jacek Dajda. 2017. Experimental Validation of Source Code Reviews on Mobile Devices. In International Conference on Computational Science and Its Applications. Springer, 533–547.

[109] Leonardo B Furtado, Bruno Cartaxo, Christoph Treude, and Gustavo Pinto. 2020. How successful are open source contributions from countries with different levels of human development? IEEE Software 38, 2 (2020), 58–63.

[110] Lorenzo Gasparini, Enrico Fregnan, Larissa Braz, Tobias Baum, and Alberto Bacchelli. 2021. ChangeViz: Enhancing the GitHub Pull Request Interface with Method Call Information. In 2021 Working Conference on Software Visualization (VISSOFT). IEEE, 115–119.

[111] Xi Ge, Saurabh Sarkar, and Emerson Murphy-Hill. 2014. Towards refactoring-aware code review. In Proceedings of the 7th International Workshop on Cooperative and Human Aspects of Software Engineering. ACM, 99–102.

[112] Çağdaş Evren Gerede and Zeki Mazan. 2018. Will it pass? Predicting the outcome of a source code review. Turkish Journal of Electrical Engineering & Computer Sciences 26, 3 (2018), 1343–1353.

[113] Daniel M German, Gregorio Robles, Germán Poo-Caamaño, Xin Yang, Hajimu Iida, and Katsuro Inoue. 2018. "Was My Contribution Fairly Reviewed?" A Framework to Study the Perception of Fairness in Modern Code Reviews. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE). IEEE, 523–534.

[114] Mehdi Golzadeh, Alexandre Decan, and Tom Mens. 2019. On the Effect of Discussions on Pull Request Decisions. In 18th Belgium-Netherlands Software Evolution Workshop (BENEVOL).

[115] Jesus M Gonzalez-Barahona, Daniel Izquierdo-Cortazar, Gregorio Robles, and Alvaro del Castillo. 2014. Analyzing gerrit code review parameters with bicho. Electronic Communications of the EASST (2014).

[116] Jesús M. González-Barahona, Daniel Izquierdo-Cortázar, Gregorio Robles, and Mario Gallegos. 2014. Code Review Analytics: WebKit as Case Study. In Open Source Software: Mobile Open Source Technologies. Springer, 1–10.

[117] Tanay Gottigundala, Siriwan Sereesathien, and Bruno da Silva. 2021. Qualitatively Analyzing PR Rejection Reasons from Conversations in Open-Source Projects. In 13th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). IEEE, 109–112.

[118] Bo Guo, Young-Woo Kwon, and Myoungkyu Song. 2019. Decomposing composite changes for code review and regression test selection in evolving software. Journal of Computer Science and Technology 34, 2 (2019), 416–436.

[119] DongGyun Han, Chaiyong Ragkhitwetsagul, Jens Krinke, Matheus Paixao, and Giovanni Rosa. 2020. Does code review really remove coding convention violations?. In 2020 IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM). IEEE, 43–53.

[120] Xiaofeng Han, Amjed Tahir, Peng Liang, Steve Counsell, and Yajing Luo. 2021. Understanding code smell detection via code review: A study of the OpenStack community. In 2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC). IEEE, 323–334.

[121] Quinn Hanam, Ali Mesbah, and Reid Holmes. 2019. Aiding code change understanding with semantic change impact analysis. In 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 202–212.

[122] Christoph Hannebauer, Michael Patalas, Sebastian Stünkel, and Volker Gruhn. 2016. Automatically recommending code reviewers based on their expertise: An empirical comparison. In Proceedings 31st International Conference on Automated Software Engineering. ACM, 99–110.

[123] Masum Hasan, Anindya Iqbal, Mohammad Rafid Ul Islam, A. J. M. Imtiajur Rahman, and Amiangshu Bosu. 2021. Using a Balanced Scorecard to Identify Opportunities to Improve Code Review Effectiveness: An Industrial Experience Report. Empirical Software Engineering 26, 6 (2021). https://doi.org/10.1007/s10664-021-10038-w

[124] Florian Hauser, Stefan Schreistter, Rebecca Reuter, Jurgen Horst Mottok, Hans Gruber, Kenneth Holmqvist, and Nick Schorr. 2020. Code Reviews in C++: Preliminary Results from an Eye Tracking Study. In ACM Symposium on Eye Tracking Research and Applications. 1–5.

[125] V. J. Hellendoorn, P. T. Devanbu, and A. Bacchelli. 2015. Will They Like This? Evaluating Code Contributions with Language Models. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. 157–167.

[126] Vincent J. Hellendoorn, Jason Tsay, Manisha Mukherjee, and Martin Hirzel. 2021. Towards Automating Code Review at Scale. Association for Computing Machinery, New York, NY, USA, 1479–1482. https://doi.org/10.1145/3468264.3473134

[127] Austin Z Henley, Kıvanç Muşlu, Maria Christakis, Scott D Fleming, and Christian Bird. 2018. CFar: A Tool to Increase Communication, Productivity, and Review Quality in Collaborative Code Reviews. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 157.

[128] Martin Hentschel, Reiner Hähnle, and Richard Bubel. 2016. Can formal methods improve the efficiency of code reviews?. In International Conference on Integrated Formal Methods. Springer, 3–19.

[129] Toshiki Hirao, Akinori Ihara, and Ken-ichi Matsumoto. 2015. Pilot study of collective decision-making in the code review process. In Proceedings 25th Annual International Conference on Computer Science and Software Engineering. IBM, 248–251.

[130] Toshiki Hirao, Akinori Ihara, Yuki Ueda, Passakorn Phannachitta, and Ken-ichi Matsumoto. 2016. The impact of a low level of agreement among reviewers in a code review process. In IFIP International Conference on Open Source Systems. Springer, 97–110.

[131] Toshiki Hirao, Raula Gaikovina Kula, Akinori Ihara, and Kenichi Matsumoto. 2019. Understanding developer commenting in code reviews. IEICE Transactions on Information and Systems 102, 12 (2019), 2423–2432.

[132] Toshiki Hirao, Shane McIntosh, Akinori Ihara, and Kenichi Matsumoto. 2019. The Review Linkage Graph for Code Review Analytics: A Recovery Approach and Empirical Study. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Tallinn, Estonia) (ESEC/FSE 2019). Association for Computing Machinery, New York, NY, USA, 578–589. https://doi.org/10.1145/3338906.3338949

[133] Gerard J Holzmann. 2010. SCRUB: a tool for code reviews. Innovations in Systems and Software Engineering 6, 4 (2010), 311–318.

[134] Syeda Sumbul Hossain, Yeasir Arafat, Md Hossain, Md Arman, Anik Islam, et al. 2020. Measuring the effectiveness of software code review comments. In International Conference on Advances in Computing and Data Sciences. Springer, 247–257.

[135] Dongyang Hu, Yang Zhang, Junsheng Chang, Gang Yin, Yue Yu, and Tao Wang. 2019. Multi-reviewing pull-requests: An exploratory study on GitHub OSS projects. Information and Software Technology 115 (2019), 1–4.

[136] Yuan Huang, Nan Jia, Xiangping Chen, Kai Hong, and Zibin Zheng. 2018. Salient-class location: Help developers understand code change in code review. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ACM, 770–774.

[137] Yuan Huang, Nan Jia, Xiangping Chen, Kai Hong, and Zibin Zheng. 2020. Code Review Knowledge Perception: Fusing Multi-Features for Salient-Class Location. IEEE Transactions on Software Engineering (2020).

[138] Yu Huang, Kevin Leach, Zohreh Sharafi, Nicholas McKay, Tyler Santander, and Westley Weimer. 2020. Biases and differences in code review using medical imaging and eye-tracking: genders, humans, and machines. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 456–468.

[139] M. Ichinco. 2014. Towards crowdsourced large-scale feedback for novice programmers. In 2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). 189–190.

[140] Daniel Izquierdo, Jesus Gonzalez-Barahona, Lars Kurth, and Gregorio Robles. 2018. Software Development Analytics for Xen: Why and How. IEEE Software (2018).

[141] Daniel Izquierdo-Cortazar, Lars Kurth, Jesus M Gonzalez-Barahona, Santiago Dueñas, and Nelson Sekitoleko. 2016. Characterization of the Xen project code review process: an experience report. In 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories (MSR). IEEE, 386–390.

[142] Daniel Izquierdo-Cortazar, Nelson Sekitoleko, Jesus M Gonzalez-Barahona, and Lars Kurth. 2017. Using Metrics to Track Code Review Performance. In Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering. ACM, 214–223.

[143] Jing Jiang, Jin Cao, and Li Zhang. 2017. An empirical study of link sharing in review comments. In Software engineering and methodology for emerging domains. Springer, 101–114.

[144] Jing Jiang, Jia-Huan He, and Xue-Yuan Chen. 2015. Coredevrec: Automatic core member recommendation for contribution evaluation. Journal of Computer Science and Technology 30, 5 (2015), 998–1016.

[145] Jing Jiang, Yun Yang, Jiahuan He, Xavier Blanc, and Li Zhang. 2017. Who should comment on this pull request? Analyzing attributes for more accurate commenter recommendation in pull-based development. Information and Software Technology 84 (2017), 48–62.

[146] Marian Jureczko, Łukasz Kajda, and Paweł Górecki. 2020. Code review effectiveness: an empirical study on selected factors influence. IET Software 14, 7 (2020), 794–805.

[147] Akshay Kalyan, Matthew Chiam, Jing Sun, and Sathiamoorthy Manoharan. 2016. A collaborative code review platform for github. In 2016 21st International Conference on Engineering of Complex Computer Systems (ICECCS). IEEE, 191–196.

[148] Ritu Kapur, Balwinder Sodhi, Poojith U Rao, and Shipra Sharma. 2021. Using Paragraph Vectors to improve our existing code review assisting tool-CRUSO. In 14th Innovations in Software Engineering Conference (formerly known as India Software Engineering Conference). 1–11.

[149] David Kavaler, Premkumar Devanbu, and Vladimir Filkov. 2019. Whom are you going to call? Determinants of @-mentions in GitHub discussions. Empirical Software Engineering 24, 6 (2019), 3904–3932.

[150] Noureddine Kerzazi and Ikram El Asri. 2016. Who Can Help to Review This Piece of Code?. In Collaboration in a Hyperconnected World, Hamideh Afsarmanesh, Luis M. Camarinha-Matos, and António Lucas Soares (Eds.). Springer, 289–301.

[151] Shivam Khandelwal, Sai Krishna Sripada, and Y. Raghu Reddy. 2017. Impact of Gamification on Code Review Process: An Experimental Study. In Proceedings of the 10th Innovations in Software Engineering Conference (Jaipur, India) (ISEC ’17). ACM, New York, NY, USA, 122–126.

[152] Jungil Kim and Eunjoo Lee. 2018. Understanding Review Expertise of Developers: A Reviewer Recommendation Approach Based on Latent Dirichlet Allocation. Symmetry 10, 4 (2018), 114.

[153] N. Kitagawa, H. Hata, A. Ihara, K. Kogiso, and K. Matsumoto. 2016. Code Review Participation: Game Theoretical Modeling of Reviewers in Gerrit Datasets. In 2016 IEEE/ACM Cooperative and Human Aspects of Software Engineering (CHASE). 64–67.

[154] O. Kononenko, O. Baysal, and M. W. Godfrey. 2016. Code Review Quality: How Developers See It. In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE). 1028–1038.

[155] Oleksii Kononenko, Olga Baysal, Latifa Guerrouj, Yaxin Cao, and Michael W Godfrey. 2015. Investigating code review quality: Do people and participation matter?. In International Conference on Software Maintenance and Evolution (ICSME). IEEE, 111–120.

[156] O. Kononenko, T. Rose, O. Baysal, M. Godfrey, D. Theisen, and B. de Water. 2018. Studying Pull Request Merges: A Case Study of Shopify’s Active Merchant. In 2018 IEEE/ACM 40th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP). 124–133.

[157] V. Kovalenko and A. Bacchelli. 2018. Code Review for Newcomers: Is It Different?. In 2018 IEEE/ACM 11th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). 29–32.

[158] Vladimir Kovalenko, Nava Tintarev, Evgeny Pasynkov, Christian Bird, and Alberto Bacchelli. 2018. Does reviewer recommendation help developers? IEEE Transactions on Software Engineering (2018).

[159] Andrey Krutauz, Tapajit Dey, Peter C Rigby, and Audris Mockus. 2020. Do code review measures explain the incidence of post-release defects? Empirical Software Engineering 25, 5 (2020), 3323–3356.

[160] Harsh Lal and Gaurav Pahwa. 2017. Code review analysis of software system using machine learning techniques. In 2017 11th International Conference on Intelligent Systems and Control (ISCO). IEEE, 8–13.

[161] Samuel Lehtonen and Timo Poranen. 2015. Metrics for Gerrit code review. In Proceedings of the 14th Symposium on Programming Languages and Software Tools (SPLST’15) (CEUR Workshop Proceedings, Vol. 1525). CEUR-WS.org, 31–45.

[162] Heng-Yi Li, Shu-Ting Shi, Ferdian Thung, Xuan Huo, Bowen Xu, Ming Li, and David Lo. 2019. DeepReview: Automatic Code Review Using Deep Multi-instance Learning. In Advances in Knowledge Discovery and Data Mining, Qiang Yang, Zhi-Hua Zhou, Zhiguo Gong, Min-Ling Zhang, and Sheng-Jun Huang (Eds.). Springer International Publishing, Cham, 318–330.

[163] Zhixing Li, Yue Yu, Gang Yin, Tao Wang, Qiang Fan, and Huaimin Wang. 2017. Automatic Classification of Review Comments in Pull-based Development Model. In SEKE. 572–577.

[164] Zhi-Xing Li, Yue Yu, Gang Yin, Tao Wang, and Huai-Min Wang. 2017. What are they talking about? analyzing code reviews in pull-based development model. Journal of Computer Science and Technology 32, 6 (2017), 1060–1075.

[165] J. Liang and O. Mizuno. 2011. Analyzing Involvements of Reviewers through Mining a Code Review Repository. In 2011 Joint Conference of the 21st International Workshop on Software Measurement and the 6th International Conference on Software Process and Product Measurement. 126–132.

[166] Zhifang Liao, Yanbing Li, Dayu He, Jinsong Wu, Yan Zhang, and Xiaoping Fan. 2017. Topic-based Integrator Matching for Pull Request. In Global Communications Conference. IEEE, 1–6.

[167] Jakub Lipcak and Bruno Rossi. 2018. A large-scale study on source code reviewer recommendation. In 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 378–387.

[168] Mingwei Liu, Xin Peng, Andrian Marcus, Christoph Treude, Xuefang Bai, Gang Lyu, Jiazhan Xie, and Xiaoxin Zhang. 2021. Learning-based extraction of first-order logic representations of API directives. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 491–502.

[169] L. MacLeod, M. Greiler, M. Storey, C. Bird, and J. Czerwonka. 2018. Code Reviewing in the Trenches: Challenges and Best Practices. IEEE Software 35, 4 (2018), 34–42.

[170] Michał Madera and Rafał Tomoń. 2017. A case study on machine learning model for code review expert system in software engineering. In 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). IEEE, 1357–1363.

[171] Mika V Mäntylä and Casper Lassenius. 2008. What types of defects are really discovered in code reviews? IEEE Transactions on Software Engineering 35, 3 (2008), 430–448.

[172] Shane McIntosh, Yasutaka Kamei, Bram Adams, and Ahmed E Hassan. 2016. An empirical study of the impact of modern code review practices on software quality. Empirical Software Engineering 21, 5 (2016), 2146–2189.

[173] Massimiliano Menarini, Yan Yan, and William G Griswold. 2017. Semantics-assisted code review: An efficient tool chain and a user study. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 554–565.

[174] Andrew Meneely, Alberto C Rodriguez Tejeda, Brian Spates, Shannon Trudeau, Danielle Neuberger, Katherine Whitlock, Christopher Ketant, and Kayla Davis. 2014. An empirical investigation of socio-technical code review metrics and security vulnerabilities. In Proceedings 6th International Workshop on Social Software Engineering. ACM, 37–44.

[175] Benjamin S. Meyers, Nuthan Munaiah, Emily Prud’hommeaux, Andrew Meneely, Josephine Wolff, Cecilia Ovesdotter Alm, and Pradeep Murukannaiah. 2018. A dataset for identifying actionable feedback in collaborative software development. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Melbourne, Australia, 126–131. https://doi.org/10.18653/v1/P18-2021

[176] Ehsan Mirsaeedi and Peter C. Rigby. 2020. Mitigating Turnover with Code Review Recommendation: Balancing Expertise, Workload, and Knowledge Distribution. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (Seoul, South Korea) (ICSE ’20). Association for Computing Machinery, New York, NY, USA, 1183–1195. https://doi.org/10.1145/3377811.3380335

[177] Rahul Mishra and Ashish Sureka. 2014. Mining peer code review system for computing effort and contribution metrics for patch reviewers. In 2014 IEEE 4th Workshop on Mining Unstructured Data (MUD). IEEE, 11–15.

[178] Rodrigo Morales, Shane McIntosh, and Foutse Khomh. 2015. Do code review practices impact design quality? a case study of the qt, vtk, and itk projects. In 22nd International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 171–180.

[179] Sebastian Müller, Michael Würsch, Thomas Fritz, and Harald C Gall. 2012. An approach for collaborative code reviews using multi-touch technology. In Proceedings of the 5th International Workshop on Co-operative and Human Aspects of Software Engineering. IEEE, 93–99.

[180] Nuthan Munaiah, Benjamin S Meyers, Cecilia O Alm, Andrew Meneely, Pradeep K Murukannaiah, Emily Prud’hommeaux, Josephine Wolff, and Yang Yu. 2017. Natural language insights from code reviews that missed a vulnerability. In International Symposium on Engineering Secure Software and Systems. Springer, 70–86.

[181] Yukasa Murakami, Masateru Tsunoda, and Hidetake Uwano. 2017. WAP: Does Reviewer Age Affect Code Review Performance?. In International Symposium on Software Reliability Engineering (ISSRE). IEEE, 164–169.

[182] Emerson Murphy-Hill, Jillian Dicker, Margaret Morrow Hodges, Carolyn D Egelman, Ciera Jaspan, Lan Cheng, Elizabeth Kammer, Ben Holtz, Matt Jorde, Andrea Knight, and Collin Green. 2021. Engineering Impacts of Anonymous Author Code Review: A Field Experiment. IEEE Transactions on Software Engineering (2021), 1–1. https://doi.org/10.1109/TSE.2021.3061527

[183] Reza Nadri, Gema Rodriguez-Perez, and Meiyappan Nagappan. 2021. Insights Into Nonmerged Pull Requests in GitHub: Is There Evidence of Bias Based on Perceptible Race? IEEE Software 38, 2 (2021), 51–57.

[184] Aziz Nanthaamornphong and Apatta Chaisutanon. 2016. Empirical evaluation of code smells in open source projects: preliminary results. In Proceedings 1st International Workshop on Software Refactoring. ACM, 5–8.

[185] Takuto Norikane, Akinori Ihara, and Kenichi Matsumoto. 2018. Do Review Feedbacks Influence to a Contributor’s Time Spent on OSS Projects?. In International Conference on Big Data, Cloud Computing, Data Science & Engineering (BCD). IEEE, 109–113.

[186] Sebastiaan Oosterwaal, Arie van Deursen, Roberta Coelho, Anand Ashok Sawant, and Alberto Bacchelli
