Artificial Intelligence (AI) is not new, but recent years have seen growing concern about the technology's political, economic, and social impact, including debates about its governance. Because AI is implemented through algorithms in an effort to make machines behave 'intelligently', its study overlaps with a broader research agenda on the social impact of algorithmic decision-making, which now permeates a range of societal processes, from online search services to high-frequency trading and autonomous vehicles. However, while the social ordering capacity of algorithms has received increasing attention from the social sciences in recent years, less focus has been placed on the institutions and processes that govern their development and deployment. In fact, the governance of AI and algorithms is characterized by a puzzling tension whereby "on the one hand, algorithms are invoked as powerful entities that govern, judge, sort, regulate, classify, influence, or otherwise discipline the world. On the other hand, algorithms are portrayed as strangely elusive and inscrutable, or in fact as virtually unstudiable" (Barocas, Hood, and Ziewitz, 2013). While important contributions have examined the governance of algorithms and AI in relation to ethical concerns of accountability and transparency, as well as compatibility with existing law, the field seemingly lacks a systematic inquiry into the broader governance system of AI. This lack of systematic inquiry constitutes the main gap that this dissertation seeks to address. Taken as a whole, the dissertation attempts to connect the dots between the literature on global governance and the technical properties of AI and artificial agents. In this light, it frames the main phenomenon to be governed as one of artificial agency, drawing on the extensive literature on social ordering by algorithms and artificial agents.
In so doing, it also seeks to draw from, and connect to, a broader literature on global governance, given the transnational nature of artificial agency. The goal is to provide a lens that enables further research in the field, and to offer empirical insights into the current system of governance for AI. Structured as a paper-based dissertation, the MPT is divided into four parts. The first part is a literature review of the broader strands of research to which the dissertation relates. The second part consists of a full draft of Paper 1, which outlines a conceptual framework for the governance of AI. This paper constitutes the main part of the MPT and is to be finalized in the spring of 2019. The third part provides a preliminary outline of Paper 2, which seeks to identify the actors involved in the governance of AI through the creation of a typology, drawing on insights from Paper 1. The fourth and final part provides a preliminary outline of Paper 3 and its methodology. This paper seeks to investigate shared principles and norms for the governance of artificial agency among the relevant actors, drawing on the findings of Paper 2.