IEEE Blockchain Podcast Series: Episode 6


A Conversation with Michael Zargham
Founder and CEO of BlockScience

Listen to Episode 6 (MP3, 61 MB)




Episode Transcript:

Brian Walker: Welcome to the IEEE Blockchain podcast series, an IEEE digital studio production. This new blockchain series, entitled Research Notes in Blockchain, is hosted by Quinn DuPont, assistant professor at the University College Dublin School of Business, and the author of Cryptocurrencies and Blockchains. This episode features Dr. Michael Zargham, founder and CEO of BlockScience. Dr. Zargham discusses how his research has led to mathematical tools that can help define the principles of crypto economic systems and address the governance and regulation of blockchain infrastructures. He further explains how blockchains, as complex systems, shaped his engineering approach to cryptoeconomics, and shares his views on purpose-driven tokens, and the role of engineers within the crypto space.

Quinn DuPont: So, let's start with-- maybe you could just tell me a little bit, actually, about your research background, because you've got a rather interesting research background, and I would like to know a little bit more about how that informs how you approach cryptoeconomics.

Michael Zargham: Sure. I'll give you the short version of the long story, which is, I was studying, mostly, sort of mathematics in high school, like college-level mathematics, and then I started doing aerospace materials, which led me in the track of dealing with high-uncertainty physical systems, which needed to have sort of high-certainty properties, where you actually made stuff and broke it to see if it worked. And then I went to college, and then I started shifting over to control theory, multi-agent systems. I worked with a guy named Reza Olfati-Saber, who's a-- basically, a multi-agent systems expert. And I did a little bit of work on social systems, and a little bit of work on robotics and multi-agent control. He introduced me to Ali Jadbabaie, who was at Penn at the time, and he worked on this sort of mixture of applications of these large-scale network resource allocation policies in multi-agent control, applied to social and economic systems. He had appointments in computer science, engineering, and Wharton. He's now the Director of Sociotechnical Systems, and the civil engineering department chair at MIT, and he still works on sort of these large-scale data-driven platforms. And so, my research background is largely informed by that arc. I bounce back and forth between sort of social systems and game theory, and how do people behave, and how do actual computational systems distribute and scale. And in particular, my work at Penn included dynamic resource allocation in networks, ranging from packet-routing problems to cascade failures, and even, believe it or not, pandemic <laughs> response-type analysis.

Quinn DuPont: Brilliant. So, maybe you could tell me a little bit, before we get into your paper on the foundations of cryptoeconomic systems, maybe you could tell me a little bit more about your present research, what you're up to right now, and maybe even say something about BlockScience.

Michael Zargham: So, I run a private R&D firm called BlockScience. BlockScience is essentially an institutional placeholder for me wanting to do work that's neither fully applied nor fully theoretical. Like, I noticed that I could do things in a sort of academic regime, where I felt like I was solving toy problems, which might be relevant to an application at some point, but not directly. And on the other hand, industry settings were sort of over-tuned towards the problems immediately being faced by teams, and I wanted to do something in the middle. And so, my current research, my in-the-middle work, revolves around two main foci. The first one is what I call generalized dynamical systems. Really, that's a name for it, but it is a generalization of the sort of standard controls and signal processing framing around real numbers, complex numbers, and sort of these nice, continuous things. In a crypto setting, we have really clear, explicit, dynamical systems over state machines, so we tend to get hybrid systems. We tend to get multiscale hybrid systems, where you're enforcing some control rules at one level, but there's a degree of freedom of individual agents making their own choices at another. And so, our generalized dynamical systems work basically encodes the same metapattern of a control system, which is sort of some inputs that come from some function; you know, G of X yields U, and you plug that U into a state update, F of X comma U, and you get a new state. Or, in a continuous system, you have a differential. But the gist of it is that you're describing the world in terms of inputs, and then system plants, that sort of mutate the state, and you get new states. And it turns out, you can describe an arbitrary state machine this way. And in particular, when you're dealing with cryptoeconomics and blockchains, you get this nice property that you can enforce the admissible inputs.
You can basically say, "Anything that doesn't do what I don't want, didn't happen." <laughs> And so you get this extra layer of mathematical formalism, where you can declare, explicitly, the set of actions that are legal, which kind of yields a differential inclusion. So, you get the set of things that could happen, or the changes in the system, in terms of these ever-expanding sets of reachable states. And what's really cool about this is, we use these kinds of methods in robotics, and this is how you assert the sort of configuration space, or reachability conditions, and those exact kinds of mathematical tools turn out to be really useful for defining properties of cryptoeconomic systems, where you can't actually declare the behavior of the agents. You could try to nudge them in one direction or another, but ultimately, they can do anything they're allowed to do. So, you would characterize the system in terms of the states it can achieve, based on what the mechanisms allow you to do. And again, this tethers really nicely to, I would say, the-- mostly, the robotics domain. In particular, there's a focus on being able to express these things in terms of dynamical systems, and therefore make arguments about the conditions under which the energy sort of expands or contracts, Lyapunov arguments. Again, this is building very much on the sort of systems engineering, signal processing estimation, and control paradigms, and we have built this open-source Python package called cadCAD, which is an implementation of these generalized dynamical systems, that allows us to do computational experiments, and actually A/B-test mechanism designs, or A/B-test behavioral assumptions. And through extending our computational tools, we can sort of do actual science on our designs, in the way that you would do science on a design of a physical system, albeit with an understanding that the set of things you have to make assumptions about is larger. 
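The control-loop pattern described here, an input policy u = g(x), a state update x' = f(x, u), and an explicitly enforced set of admissible inputs, can be sketched in a few lines of Python. Everything below is illustrative; the names and numbers are invented and are not drawn from cadCAD or any other library.

```python
# Minimal sketch of a generalized dynamical system over a state machine:
# a policy g proposes an input u from the state x, an admissibility check
# filters out illegal inputs, and a state update f applies the rest.
# All names and numbers here are illustrative.

def g(x):
    """Policy: each step, the agent tries to spend 10."""
    return {"spend": 10}

def admissible(x, u):
    """Enforce admissible inputs: anything illegal 'didn't happen'."""
    return u["spend"] <= x["balance"]

def f(x, u):
    """State update: apply an admissible input, producing a new state."""
    return {"balance": x["balance"] - u["spend"]}

def step(x):
    u = g(x)
    # Inadmissible inputs leave the state unchanged
    return f(x, u) if admissible(x, u) else x

x = {"balance": 25}
trajectory = [x]
for _ in range(3):
    x = step(x)
    trajectory.append(x)
# The third spend is inadmissible, so the state stops changing at balance 5
```

The admissibility check is where the "differential inclusion" flavor comes from: rather than predicting what agents will do, you characterize the set of states reachable through legal inputs.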
So, we partition things into controlled and uncontrolled variables. Controlled variables are parameters or mechanisms we set or design, and uncontrolled things are related to behavioral assumptions or environmental conditions. And much like you might for a bridge or a building, you look at the conditions on the uncontrolled things under which your design meets certain properties. And we call that package, or that sort of methodology, cadCAD-- complex adaptive dynamics Computer-Aided Design-- and the research has really been about how to make this kind of workflow fast enough to be viable in a real-world setting, so it's not just, you know, sitting back in an armchair theorizing about the system, but actually, as it's coming to life, modeling and iterating on the models rapidly enough to actually inform design decisions. So, I guess, that is one big focus of our work. The complement to that is more social. It is related to how these things that we design actually manifest and people experience them. I do a little bit of collaboration with some folks from RMIT, in their sort of automation and society and blockchain groups, looking at the ethnography of digital infrastructures, and talking about how these things that we build actually affect people. And I make this point in the context of engineering, because whenever you have an infrastructure that people are using, it's not good enough to just say that it's been deployed. It needs to fulfill some function in society. And so, the effort to sort of validate from outside looking in, as opposed to-- or, I mean, I guess, inside looking out is another way to think about it. The relationship between your technology and its users and their experiences, and even their safety-- we'll get onto that in a second-- this sort of relates to how things are governed.
Because, ultimately, you build and deploy an automation technology, and it fulfills its function, up until the point that you need something else, or there's gaming going on, and you need the system to be able to move in response to it, or, maybe, it just didn't actually accomplish the intended function, and you need to change some parameters, or adjust the design in time, in order for it to continue to fulfill its function. That kind of interplay between the observation of the current state of the infrastructure, the design... I really rely on some of the ideas that are presented by Nancy Leveson, in her book Engineering a Safer World. And she's a systems engineer, aerospace specialty, at MIT, and this particular book is one I give out to clients and team members, to talk about how we manage the relationship between these things that we design and build and operate and maintain, with the sort of people who are exposed to them. And I personally think that this is a much more appropriate paradigm for not just governing, but also regulating these sort of blockchain infrastructures; that we can't regulate them quite like finance, with hard rules, you know, that you check the box, because there's just too big of a design space. But you can regulate them like you would engineering infrastructure, where there are certain processes, due diligence, certain levels of, "Okay, did you do these things and check that all of this is good? Have you made sure that the sort of end users of this system are safe from things that you couldn't reasonably expect them to understand or evaluate?" The thing I don't like about the mentality in crypto right now-- it's fading, but there's very much a, like, "If you can't check it for yourself, then just deal with it." And I think that the infrastructure analogy allows us to say, "That's not true."
Actually, as the engineer, or even as the operating entity or the governing entity, you have some obligation to try to make sure that the system is safe for its users, up to the point where they can use it in a user journey that makes sense for them, without you saying, "Well, you just drive over the bridge, and you assume it's safe, and it's your fault if it falls down, because you didn't check." I'm like, "You're kidding me." Like, I want you to be able to drive over the bridge and feel safe driving over it, or I have no business putting it up.
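The controlled-versus-uncontrolled partition and the A/B-testing workflow described above can be sketched as a plain parameter sweep. This is illustrative only, not cadCAD's actual API; the fee mechanism, the behavioral model, and the demand scenarios are all made up for the example.

```python
# Sweep controlled parameters (candidate mechanism designs) against
# uncontrolled scenarios (behavioral/environmental assumptions), then
# compare outcomes. Illustrative only; not cadCAD's API.

def simulate(fee_rate, demand):
    """Toy market: higher fees earn more per unit but suppress participation."""
    treasury = 0.0
    for d in demand:
        participation = max(0.0, 1.0 - fee_rate) * d  # crude behavioral model
        treasury += fee_rate * participation
    return treasury

controlled = [0.01, 0.05, 0.10]  # fee designs we get to choose
uncontrolled = {                 # demand paths we do not control
    "calm":  [100] * 10,
    "spike": [100] * 5 + [500] * 5,
}

results = {
    (fee, name): simulate(fee, demand)
    for fee in controlled
    for name, demand in uncontrolled.items()
}
# For each scenario, find the design that performs best under that assumption
best = {name: max(controlled, key=lambda fee: results[(fee, name)])
        for name in uncontrolled}
```

In a real engineering workflow the "uncontrolled" axis would be a family of behavioral models or stochastic scenarios, and the comparison would run over all the properties you care about, not a single revenue number.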

Quinn DuPont: Yeah, that's such a helpful insight. And it brings us back, if I may, to where you really start your Foundations of Cryptoeconomic Systems paper, by looking at, or characterizing it, as a complex system. So, maybe you could just say a little something about what a complex system is, and how it informs your approach to cryptoeconomics.

Michael Zargham: Sure. Complex systems is sort of an interdisciplinary lens that allows you to sort of step back a minute and look at the system from multiple perspectives. So, I would say, the most important facet of the complex systems mindset is that you really can't fully know what's going to happen. So, you sort of relinquish this expectation that you can dictate exactly how the system will work. For sure, you can dictate how parts of it work, but those parts interact with other parts that you can't dictate; and thus, any number of unexpected, emergent properties could come out. And you can attempt to constrain that through something like the configuration space, as I mentioned earlier, but in general, you can't even begin to deal with that until you accept that degree to which you don't fully control this system. These are-- then, they're like cyber-physical systems, in the sense that you have to deal with the interplay between the technical layer, and the computational layer, and the social layer. And if you sort of try to overly reduce it to one of those problems, you could get something very different from what you intended. I think the couple facets that we deal with directly, like network systems, tend to be under complex systems. Nonlinear systems tend to be under complex systems. And in a sense, the complex systems community works hard to take patterns and elevate them to their modeled abstractions, and even still accept that those models are reductions, or lenses, or perspectives. And by taking many of them, you kind of fill in the picture, and that helps you say, "Ah, okay, here's what this would imply, if we think about it like this." And if I get another such lens, and it contradicts it, rather than say, "One of those things is right," I say, "Okay, these each give me a different view of the same phenomena. 
What can I do to either find a Pareto optimum that trades off, or what can I do to lift this up and elevate it to the point where I can see the interplay between those two effects, and deal with them?" Right? Because, as an engineer, you have to make subjective design decisions, trade-off decisions, et cetera, and this is another reason why I think that the crypto space really needs the engineering mentality, because we need to move beyond this perception that the model and reality match, or this perception that we can make it do exactly what we want. Instead, we have to look at this from a requirements perspective, and understand, "What are the trade-offs? What's feasible? Okay, if I can't have exactly what I'd imagined in my head, among the things that I can make happen, which ones are preferable?" And it's those design trade-offs that come up in my work with BlockScience in particular: working with clients, they often need help understanding that there are actual trade-offs, that they can't just program it to do what they want; that there are economic considerations that are akin to energy conservation equations; that if you try to take it out somewhere, the system's going to adapt somewhere else. And it doesn't matter that you want it to work like that. There's going to be some sort of conservation baked in somewhere, and you'd be better off making it explicit and dealing with it, than trying to force it out and having it pop up somewhere unexpected. And again, this comes back to respecting those constraints, and actually working with them to achieve the desired outcomes.
This is something that I think engineers, as professionals, as a discipline, have been trained to do, but maybe not so much in economics, or even in computer science, where I think there's a tendency to sort of represent things, assuming that the representation of the thing and the thing match; whereas an engineer has, pretty much from the beginning, been taught that these are different, and that you manage the relationship between the representation of the thing and the thing.

Quinn DuPont: Mm-hm. Yeah. And so, one of the ways of not reducing this, that you point out in your paper, is to attend to the multiple scales that are going on. So, maybe you could tell me a little bit more about the micro, meso, and macro, and particularly, maybe, you can say something about how this approach gives you insight into some of the emergent properties, and what kind of emergent properties we might expect when we talk about cryptoeconomics.

Michael Zargham: Sure. So, I think the trick to the micro-meso-macro frame is that it's actually kind of a fractal. Scale considerations are effectively continuous, in a sense. There's different points of focus, where you can really zoom in and see what's going on, or where models kind of come clear. But ultimately, you're saying, if I examine the decision-making from an individual agent perspective, that's where we think about things on micro terms. And as I zoom out and look at, say, the system or subsystem, that would be sort of meso. And that's where I'm making policy decisions or designing mechanisms. And then, ultimately, at the macro scale, we're looking at the measurables, and sort of seeing, "Ah, in aggregate, the system seems to have these properties." And you can imagine a sort of feedback loop in these cryptoeconomic systems, where individual agent behavior effectively aggregates into global systems state. But then, if global systems state feeds back through the mechanisms or incentives on the agents themselves, then that loop actually doesn't-- it's not really agent-to-agent interactions generating the emergent phenomena; it's the agent-to-system and system-to-agent that create a lot of the phenomena. And it's kind of cool to see how these things happen at the same time, because you could almost think of agent-to-agent as a kind of spatial distribution, or, you know, think in agent-based modeling terms, the agents competing and interacting with each other. But if the very shape of the game changes in response to the global state, you get a very different kind of dynamic game.
And the example that I think is usually most accessible to people is the bitcoin hash-rate controller, where, in effect, the game of competing for bitcoin block rewards amounts to these sort of hash races, which are effectively probabilistically proportional to your hash power, but because of the difficulty controller feeding back, it actually means that the game is changing itself, as the players ramp up. And it creates this potential runaway scenario, where there is always an incentive to add more hash power, but at the same time, this was arguably the design goal of the system. It wanted to maximize for security at a time when it had very little, because there were very few miners, and by creating this economic incentive, the system drives itself towards ever more and more participants in the mining game. And this is actually one of those questions, where you're like, "Okay, well, that was perfect for the early-days scenario, but the question that stands out is then, to what extent is that the right solution?" As the mining rewards diminish over time, according to the mining schedule, and as the system gets larger and larger and larger, you could argue that expanding the security by one more miner, with however many more delta hash power in the race, is costing a certain amount of energy, effort, et cetera, relative to the marginal gain in security, which is the system goal. I'm not saying more security is bad, but this sort of engineering consideration starts to say something like, okay, at what point does more hash power equal more security, when, maybe, large-- we'll call them political blocks, are controlling large fractions of that, or mining pools, or even regions of the world? And we have to ask ourselves questions about, "Is that still the right policy?" And I think, in bitcoin's case, it's likely to go unchanged, largely because some of its power is in its relative immutability.
I won't say "its complete immutability," because there are changes that happen through processes, that largely involve editing code, and you would say the politics of bitcoin is whether or not the majority of the hash power operators are choosing to take code changes. But, ultimately, that's still a governance process, and you could imagine a future where those operators chose to make decisions that impacted the hash game. But I would also argue that that could undermine the perception of its security, even if it improved its security. So, it's a whole big pile of layers of technology, economics, and politics that go into the ongoing management and maintenance of institutions, even if those are decentralized institutions.
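The hash-rate feedback described above can be caricatured as a simple control loop. This is a toy sketch only: real Bitcoin retargets every 2016 blocks and clamps each adjustment to a factor of four, but every number and name below is invented to show the structure, not the protocol.

```python
# Toy model of a difficulty controller as a feedback loop: difficulty is
# scaled so the expected block time returns to target, while miners keep
# adding hash power between retargets. Illustrative numbers only.

TARGET_BLOCK_TIME = 600.0  # seconds

def retarget(difficulty, observed_block_time):
    """Scale difficulty toward the target block time, clamped to 4x moves."""
    ratio = TARGET_BLOCK_TIME / observed_block_time
    return difficulty * min(max(ratio, 0.25), 4.0)

difficulty, hash_rate = 1.0, 1.0
history = []
for epoch in range(5):
    # Expected block time falls as hash power outpaces difficulty
    block_time = TARGET_BLOCK_TIME * difficulty / hash_rate
    difficulty = retarget(difficulty, block_time)
    history.append((hash_rate, difficulty))
    hash_rate *= 2.0  # the runaway incentive: more hash power every epoch

# Difficulty chases hash power with a one-epoch lag, restoring the target
# block time even as the race keeps escalating.
```

The point of the sketch is the structure: the game the miners play is reshaped by the aggregate state they jointly produce, which is exactly the agent-to-system and system-to-agent loop described earlier.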

Quinn DuPont: Let's talk a little bit more about these design goals. So, in your paper, you mentioned purpose-driven tokens, and they're able to, obviously, incentivize certain goals that get set out. But then you also sort of say, "Well, if we're being realistic here, we're going to necessarily have to be somewhat polycentric when it comes to these design goals and recognize that there might not be any one social optimum strategy here." What can you say about that, about these design goals and these purpose-driven tokens?

Michael Zargham: Yeah. So, I think the first thing to deal with purpose-driven tokens is to sort of address some semantics, where the purpose-driven token idea connects to this broader zeitgeist that tokens could be used as instruments for coordination of labor or effort, whether it's computational labor or human labor; and that we use the term "purpose-driven tokens" to really highlight the fact that the tokens are not so much the token itself, but they are tools for achieving some end. And you'll see the same constructs that we would call purpose-driven tokens labeled anything from a native token to a protocol token, to a utility token, to a coordination token, to a-- you know, the names are aplenty, because all we're really saying is that the token is effectively a... a-- how would you say? It is worked backwards to achieve something, rather than-- you don't drop it in, and say, "Well, whatever happens." You say, "We want the following type of coordination to emerge, so let's try to work out what instrument will facilitate that." So, it's not a piece of capital, in the sense of, "I'm accumulating it for-- in a sort of high-moneyness way," but rather something that is intended to be, at least in my opinion, commodity-like, in the sense of a productive capacity. So, you would use a token design that was meant to achieve some sort of coproduction, some sort of export; and that, although there's sort of financial incentives implied, those financial incentives are really meant to kind of align vectors, to get people moving in the same direction, or moving together, or completing tasks. Or we might have tasks that have multiple types of discrete effort that are different in the resources, skills, et cetera, that you need. You kind of get them to group up and do the thing together, and mutually align their incentives, maybe the way that a general contractor might, with a bunch of subcontractors, right? 
There's money involved, but you need some machinery to line up separate actors to work together, to accomplish a shared goal. And so, the way that I think about purpose-driven tokens is as kind of like automating away the layer of a management or administrative process that is required for aligning individual decision-makers. That does involve economic activity, but it's not necessarily like a finance activity. It's more of a-- I don't know, like-- this is maybe an overly strong analogy, but like a factory floor, where you need to coordinate it, in order to accomplish some throughput, but less optimized and more resilient. So, factory floors are not inherently resilient to perturbations. If you move a piece of machinery around or swap the order in which two things are sitting, everything can go to crap. But if you're dealing with something that is more decentralized or more organic, then you would potentially be quite resilient to minor perturbations or things that are outside of your control; one particular entity entering or leaving. I tend to find that the analogies like the factory floor start to break down for that reason, but ecological analogies hold up really well. They tend to have more redundancy and more resilience.

Quinn DuPont: Yeah. So, to pick up on that note of ecological analogies, maybe you could say something about this very fascinating idea of computational social science, and how a cryptoeconomic system might be part of understanding the social scientific world.

Michael Zargham: Yeah. So, I think, for this, we need to orient engineering and science adjacent to each other. So, one thing I like to do is think of, you know, science is producing knowledge using technology, and engineering is producing technology using knowledge. And so, they're sort of almost dual to each other. And in this case, computational social science is effectively the science side of this equation. So, when I was talking, earlier, about our computational models and complex adaptive dynamics, we are basically using methods in computational social science; so, observing real data, trying to understand phenomena, then basically abstracting those phenomena into models, and then running those models through what is effectively counterfactuals, or alternate worlds. Because once you input-- once you impose or put in some new rules, or you put those players into a new game of sorts, they are going to follow, maybe, similar heuristics, or they might have similar characteristics themselves, but those same characteristics generate different behaviors, because they're in a different world. And so, the computational social science side of this is about taking the best practice of methods and tools in mixing social scientific observations with modeling and simulation, as well as data analysis, to understand the social phenomena. And then we put that in its role within the design, or the evaluation, of an engineered system. So, we create that feedback loop between science and engineering in practice, not just in theory. We exercise our scientific skills, or our computational social scientific skills, in the simulation of the-- what might happen, given the design, as well as for projecting into the future of existing systems, and analyzing interventions or changes. 
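A toy illustration of that counterfactual step: the same agents, following the same heuristic, placed under a changed rule, produce a different aggregate outcome. All names and numbers below are invented for the example.

```python
# Same agents, same heuristic, different "world" (rule set): the
# counterfactual run changes aggregate behavior without changing agents.

def heuristic(reservation, effective_price):
    """Every agent uses the same rule: participate iff it is worthwhile."""
    return effective_price <= reservation

def run_world(fee, reservations, price=50.0):
    """Count participants under a given rule (here, a flat fee)."""
    return sum(heuristic(r, price + fee) for r in reservations)

agents = [40, 55, 60, 80, 100]            # heterogeneous reservation values
baseline = run_world(0.0, agents)         # the observed world
counterfactual = run_world(10.0, agents)  # a proposed rule change
# The fee prices out one agent: participation drops from 4 to 3
```

Nothing about the agents changed between the two runs; only the rules of the game did, which is why the same characteristics generate different behavior in a different world.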
And it's actually a relatively well-known phenomenon in social science that you do a policy change, and you often get a different result than what you expected, generally due to feedback loops that were unaccounted for in the models. And so, our computational social science work is largely the complement to our engineering work. I'd like to make a call to action, with regards to engineers who might be listening to this, and that is to get involved, because I think one of the biggest needs in this space is not just the engineering skill set, but the engineering sort of ethics, the engineering process and mindset. Because I think we really do need-- and I know I spoke about it a little bit earlier-- people who are thinking about this from a-- not just a public goods in the sort of nonprofit sense, but a public goods in the infrastructure sense. You know, thinking in terms of building things for the benefit of people, of the users. And I think that the engineers have the social institution for managing this kind of infrastructure, not just the technical skills. And so, I would very strongly invite people who are, at least, interested in these new technologies to bring with them their existing expertise around designing, building, operating, and governing infrastructure.

Brian Walker: Thank you for listening to our interview with Dr. Michael Zargham. To learn more about the IEEE Blockchain Initiative, please visit our Web portal at