By Irene K. F. Kirchner (Georgetown University)
This essay is part of the Islamic Law Blog’s Roundtable on Islamic Legal History & Historiography, edited by Intisar Rabb (Editor-in-Chief) and Mariam Sheibani (Lead Blog Editor), and introduced with a list of further readings in the short post by Intisar Rabb: “Methods and Meaning in Islamic Law: Introduction.”
Recurrent topics of the Roundtable on “Methods and Meaning in Islamic Law” revolve around the question: What is Islamic law, and where does the interpretive authority in answering this question lie? In the past or in the present, in theory or in practice, in a set of texts or in lived culture? This fundamental question is very much a methodological one because, as scholars of Islamic law and legal history, we design our corpus of primary and secondary sources accordingly and assign interpretive authority in the process: we decide whose voices are heard and formulate our models of Islamic legal history and culture based on these decisions. In the following anecdotal report, I would like to reflect on how my engagement with digital technologies and quantitative methods during my research on the sharī‘a-compliancy of cryptocurrencies has shaped my scholarship: first, in defining what constitutes a source of legal literature; second, in locating relevant sources; and, third, in filtering my search results and establishing the most authoritative sources.
Cryptocurrencies are digital assets that facilitate a transfer of information: commonly a transfer of value (money), as in Bitcoin, but also a transfer of contractual information (smart contracts), as in Ethereum. They are traded on online platforms and in digital apps, with consequences for questions of jurisdiction and liability. The essay does not aim to make a definitive statement on whether cryptocurrencies are sharī‘a-compliant. Rather, combining normative and descriptive approaches, it outlines historical definitions of money, tradeable property, and the law of sale contracts and their evolution, and provides a survey of the debate among Muslim scholars and the Muslim community.
This proved to be a very complex task: Islamic law is not a fixed code of law but a set of jurisprudential interpretations that differ over time, across and within legal schools, and between theory and practice. Islamic banking and finance is itself an emerging legal culture and set of theories, practices, and financial products that motivates a new engagement with and reconceptualization of classical Islamic law in order to adapt it to today’s political and economic realities. Finally, the debate on the sharī‘a-compliancy of cryptocurrencies is shaped by the competing economic and political interests of governments and legislators, religious scholars, companies and business consultants, private investors, and crypto-users. So where does interpretive authority lie, and which sources should I consider in my research?
In his contribution to this month’s Roundtable, Najam Haider welcomes the renewed interest in geography in the study of Islamic legal history in order to further explore how legal theories interrelate with local legal practices. In her essay, Rula J. Abisaab calls us to “read around and outside the legal text itself” in order to unfold how Islamic law is shaped by structures of power and in relation to the respective socio-political, economic or theological context. In line with these two approaches, I have come to view “the digital” as a legal space with its own culture, power structures, socio-political realities and various agents promoting different agendas.
I decided to embrace the challenge as an opportunity to rethink my definition of what constitutes legal literature: I included online fatwās and publications from religious scholars, press releases and fatwās by state legislators, online journals, business plans, certificates by sharī‘a consultancies, conference proceedings from the Accounting and Auditing Organization for Islamic Financial Institutions (AAOIFI), as well as social media posts from Reddit, YouTube videos, and, very importantly, comments from comment sections that allow for an interactive dialogue between the authors of these various legal viewpoints and their audience. After all, the inclusive and interactive nature of the internet allows for a much wider (global) audience and simultaneously archives the participation of this audience in legal discussions. Additionally, I studied the classical legal texts that are most frequently quoted in this debate (like Ibn Rushd’s Bidāyat al-Mujtahid) as well as secondary literature on the history of Islamic economic theory and practice in order to provide a theoretical background and normative frame. By choosing a comprehensive definition of what constitutes a source of legal culture, I aimed, on the one hand, to avoid overestimating the interpretive authority of a small handful of legal texts and a small scholarly elite and, on the other, to avoid presenting current practices as a continuation of historical practices without any further critical review.
This comprehensive approach brought a challenge: I could only analyze a portion of the sources that the internet provided me with and was, yet again, faced with making choices on whom to assign interpretive authority to. In fact, just by engaging with the internet as an archive and relying on search algorithms such as Google Search, I had already made certain choices. Whether I used Google or Google Scholar, YouTube’s or Reddit’s search interfaces or academic search engines such as HoyaSearch (the library search interface of Georgetown University) or JSTOR, I located and chose the sources for my research based on search results that were generated and ranked by an algorithm: In short, search engines made these choices for me.
By relying on search engines to locate and choose my research corpus of primary and secondary sources, I had employed quantitative methods that come with implicit assumptions about what relevance and authority mean and how they can be measured. Because all of these search engines are proprietary, it is impossible to know exactly how they establish relevance. What we do know, in general, is that relevance is measured according to (1) how frequently the keywords specified in the search query appear in the title, URL, metadata, and content of a post or website (here, quantitative methods such as Latent Dirichlet Allocation (LDA) and Vector Space Models (VSM) are employed to weight these term frequencies), and (2) how popular a website or post is, that is, how much traffic and how many clicks it has received, with extra weight given to recent traffic and clicks (which makes it “trending”), and how often it is referenced, “re-posted,” “shared,” or linked to. Upon inquiry, I also learned that the Georgetown library manually adjusts its rankings so that local collections and works are promoted above those not held at Georgetown. Relevance is thus also measured by availability, in this case in that locally available sources are ranked as more relevant by the search algorithm.
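The ranking logic sketched above can be made concrete in a few lines of Python. This is a minimal, hypothetical illustration, not the actual algorithm of any search engine mentioned here (those are proprietary): it scores documents by TF-IDF cosine similarity to the query, a basic vector space model, and blends in a normalized popularity signal. The blending weight `alpha` and all data are my own assumptions.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}   # smoothed IDF
    vecs = [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rank(query, docs, popularity, alpha=0.7):
    """Rank documents by a blend of keyword relevance and popularity."""
    vecs, idf = tf_idf_vectors(docs)
    q_vec = {t: tf * idf.get(t, 0.0) for t, tf in Counter(query).items()}
    max_pop = max(popularity)
    scores = [alpha * cosine(q_vec, v) + (1 - alpha) * p / max_pop
              for v, p in zip(vecs, popularity)]
    return sorted(range(len(docs)), key=scores.__getitem__, reverse=True)

# Hypothetical corpus: with a high enough alpha, the keyword-relevant
# post outranks a far more popular but irrelevant one.
docs = [["bitcoin", "halal", "sharia"], ["cooking", "recipes"], ["bitcoin", "mining"]]
order = rank(["bitcoin", "halal"], docs, popularity=[10, 1000, 5])
```

The point of the sketch is that both signals named in the text, keyword frequency and popularity, enter a single score; changing `alpha` silently shifts interpretive authority between them.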
Scholars have been using search engines in order to find and locate relevant material for long enough to know that we need to experiment with different keywords, synonymous terms, and search parameters in order to get meaningful results. Search engines have become quite successful in measuring content relevance and frequently prove to be more flexible and accurate than manual subject tags. But, as scholars of the humanities, do we share the assumption that popularity and availability are good measurements of relevance?
Following up on intertextual references and footnotes, consulting established experts, and continuously adding to the original corpus of primary and secondary sources will prevent a thorough scholar from relying solely on search algorithms to collect a corpus of relevant source material. The quantitative method of search engines in building our research corpora often merely complements a qualitative approach that relies on expertise and experience. Yet one could argue that intertextual references, footnotes, and expert recommendations are just another way to establish relevance based on popularity: as noted, in my study of the classical legal literature, I focused on those texts that were frequently quoted by Muslim scholars and the secondary literature, i.e., on those texts that already enjoyed a certain popularity within the scholarly community (like Ibn Rushd’s Bidāyat al-Mujtahid). Also, when we, as scholars, walk into university libraries to browse bookshelves or visit a book fair, we rely on availability (and the lack thereof) as a measure of relevance.
Are search engines, then, introducing an entirely new method, or do they follow long-standing assumptions about what constitutes relevance and interpretive authority? What has become clear to me during my research is that quantitative methods are neither inferior nor superior to qualitative methods: they are not more or less objective, nor are they radically different in their implicit methodological assumptions. But we should reflect on what these methodological assumptions are and how we can purposefully use them.
In my attempt to locate the most authoritative positions, I made the conscious choice to rely on quantitative data such as numbers of clicks, views, likes, ratings, upvotes, and shares wherever available. I did not mine this metadata but rather experimented with the ranking algorithms of different search engines to rank according to popularity, i.e., according to “rating” or number of “views” on YouTube, number of “citations” in Google Scholar, and “hot” (number of recent comments) or “top” (number of upvotes) on Reddit. For example, I chose to include Almir Colan’s YouTube video on the sharī‘a-compliancy of cryptocurrencies because it ranked highest in number of views. The underlying assumption of this approach is that these clicks and ratings are a measurement of the community’s engagement with a post and thus measure the rate of its reception. In this way, popularity as a measurement of relevance may prove to be a feature, not a bug, and may be consciously employed as a research method. However, this approach favors trending perspectives and thus creates a new form of orthodoxy that is almost impossible to escape: none of the above-mentioned search engines allows ranking according to “formerly popular” or “less popular.” This is a well-studied phenomenon and has been explored, for example, in the context of fake news. Again, we may argue that we are all biased by our education to follow the latest scholarly trends as we build on the expertise of our teachers and colleagues. Nonetheless, it is important to note that “the digital” principally favors the (very recent) present over the past in ranking for relevance, and we risk ignoring authoritative tradition.
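The “trending” effect described above can be illustrated with a toy sketch: if a ranking decays older views with a half-life, a modest recent surge outranks a formerly popular post with far more total views. All numbers and names are hypothetical; real platforms’ ranking formulas are proprietary.

```python
from datetime import datetime

def trending_score(views_by_day, now, half_life_days=7.0):
    """Exponentially decay older views so recent engagement dominates,
    mimicking a 'hot' or 'trending' ranking (toy illustration)."""
    return sum(views * 0.5 ** ((now - day).days / half_life_days)
               for day, views in views_by_day.items())

now = datetime(2021, 2, 12)
classic = {datetime(2021, 1, 1): 5000}            # many views, long ago
surge = {datetime(2021, 2, 10): 800,
         datetime(2021, 2, 11): 900}              # fewer views, very recent

# Total views favor the classic post, yet its views are ~6 half-lives old,
# so the trending score favors the recent surge: "formerly popular" is
# simply not a rank order this scheme can produce.
```

This is precisely the asymmetry noted in the text: under such a decay, no parameter choice lets a past peak of attention resurface once its engagement has gone stale.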
The digitization efforts of private and academic institutions and the internet as an archive have made available a tremendous amount of new primary and secondary sources. This large scale of available literature is exactly what compels scholars to employ quantitative methods in our research, even if it is just in using search engines in order to filter the vast number of results. Quantitative methods enable us to analyze this vast corpus of sources on a new scale and I see great value in employing quantitative methods and digital tools in research. I also suggest that quantitative methods are not a radical break with our “traditional” methods in the humanities. Instead, scholars have been using quantitative methods for much longer and much more frequently than is often recognized as we employ search engines (academic or not) in our research. What we need now is an increasingly conscious and informed engagement with the digital as a method that acknowledges and reflects its implicit methodological assumptions.
 Irene K. F. Kirchner, “Are Cryptocurrencies ḥalāl? On the Sharia-Compliancy of Blockchain-Based Fintech,” Islamic Law and Society 28, nos. 1-2 (2020): 76-112.
 Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System,” Bitcoin.org, 2008, https://bitcoin.org/bitcoin.pdf. To those readers who are unfamiliar with cryptocurrencies and the Blockchain technology, I highly recommend YouTube tutorials for a first overview; for example 3Blue1Brown, “But how does bitcoin actually work?,” YouTube, July 7, 2017, https://www.youtube.com/watch?v=bBC-nXj3Ng4&ab_channel=3Blue1Brown.
 See Rosario Girasa, Regulation of cryptocurrencies and blockchain technologies: national and international perspectives (Switzerland: Springer-Verlag, 2018).
 For an overview of the history of Islamic banking and finance and introductory bibliographies on the topic, see EI3, s.v. Finance (Kilian Bälz) and EI3, s.v. Banks and banking, modern (Timur Kuran).
 For an overview of the debate on the sharī‘a-compliancy of cryptocurrencies, see the section on “Cryptocurrencies in the Muslim world” in Kirchner, “Are Cryptocurrencies ḥalāl?”.
 Najam Haider, “Future avenues in the study of Islamic law,” Islamic Law Blog, December 22, 2020, https://islamiclaw.blog/2020/12/22/future-avenues-in-the-study-of-islamic-law/
 Rula J. Abisaab, “Writing Islamic Legal History,” Islamic Law Blog, December 24, 2020, https://islamiclaw.blog/2020/12/24/writing-islamic-legal-history/
 The links provided here serve as examples. For a comprehensive bibliography, please refer to the original article.
 Ted Underwood, “Theorizing Research Practices We Forgot to Theorize Twenty Years Ago,” Representations 127, no. 1 (August 2014): 64–72.
 Latent Dirichlet Allocation is a method of topic modeling. See David M. Blei, Andrew Y. Ng, and Michael I. Jordan, “Latent Dirichlet Allocation,” Journal of Machine Learning Research 3 (2003): 993–1022.
 Vector space models are a quantitative model for measuring text and word similarities and are frequently employed in information retrieval and relevance rankings. See Peter D. Turney and Patrick Pantel, “From Frequency to Meaning: Vector Space Models of Semantics,” Journal of Artificial Intelligence Research 37 (2010): 141–88.
 I would like to thank Melissa Jones, Georgetown University librarian, for her time and effort in answering my persistent questions.
 Almir Colan is a consultant of the Accounting and Auditing Organization for Islamic Financial Institutions (AAOIFI) and a director of the Australian Centre for Islamic Finance (AUSCIF). See https://www.almircolan.com/.
 See, for example, Yotam Shmargad and Samara Klar, “Sorting the News: How Ranking by Popularity Polarizes Our Politics,” Political Communication 37, no. 3 (2020): 423–46; David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain, “The Science of Fake News,” Science 359, no. 6380 (March 2018): 1094–96; Bente Kalsnes, “Fake News,” Oxford Research Encyclopedia of Communication, September 26, 2018, https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-809.
 In fact, the Digital Humanities were born out of efforts to build a concordance of Thomas Aquinas’ works (the Index Thomisticus). See also Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Urbana: University of Illinois Press, 2011).
(Suggested Bluebook citation: Irene K. F. Kirchner, Measuring interpretive authority: a methodological reflection, Islamic Law Blog (Feb. 12, 2021), https://islamiclaw.blog/2021/02/12/measuring-interpretive-authority-a-methodological-reflection/)
(Suggested Chicago citation: Irene K. F. Kirchner, “Measuring interpretive authority: a methodological reflection,” Islamic Law Blog, February 12, 2021, https://islamiclaw.blog/2021/02/12/measuring-interpretive-authority-a-methodological-reflection/)